Title: Interface to 'fastai'
Description: The 'fastai' <https://docs.fast.ai/index.html> library simplifies training fast and accurate neural networks using modern best practices. It is based on research into deep learning best practices undertaken at 'fast.ai', including 'out of the box' support for vision, text, tabular, audio, time series, and collaborative filtering models.
Authors: Turgut Abdullayev [ctb, cre, cph, aut]
Maintainer: Turgut Abdullayev <[email protected]>
License: Apache License 2.0
Version: 2.2.2
Built: 2024-11-07 05:29:02 UTC
Source: https://github.com/eagerai/fastai
Multiply
## S3 method for class 'fastai.torch_core.TensorMask' a * b
a |
tensor |
b |
tensor |
tensor
Div
## S3 method for class 'fastai.torch_core.TensorMask' a / b
a |
tensor |
b |
tensor |
tensor
Logical_and
## S3 method for class 'fastai.torch_core.TensorMask' x & y
x |
tensor |
y |
tensor |
tensor
Floor divide
## S3 method for class 'fastai.torch_core.TensorMask' x %/% y
x |
tensor |
y |
tensor |
tensor
Floor mod
## S3 method for class 'fastai.torch_core.TensorMask' x %% y
x |
tensor |
y |
tensor |
tensor
The assignment has to be used for safe modification of the values inside tensors/layers
left %f% right
left |
left side object |
right |
right side object |
None
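A minimal sketch of in-place assignment; the layer and the values here are illustrative, not from the package docs:
## Not run:
lin = nn()$Linear(10L, 2L)
# overwrite the bias values in place instead of rebinding the R name
lin$bias %f% tensor(c(0, 0))
## End(Not run)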
Pow
## S3 method for class 'fastai.torch_core.TensorMask' a ^ b
a |
tensor |
b |
tensor |
tensor
Add
## S3 method for class 'fastai.torch_core.TensorMask' a + b
a |
tensor |
b |
tensor |
tensor
Add layers to Sequential
## S3 method for class 'torch.nn.modules.container.Sequential' a + b
a |
sequential model |
b |
layer |
model
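A short sketch (assuming the 'nn()' module accessor exported by this package; the layer choices are illustrative):
## Not run:
seq_model = nn()$Sequential(nn()$Linear(10L, 10L))
# the S3 '+' method appends a layer and returns the extended model
seq_model = seq_model + nn()$ReLU()
## End(Not run)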
Less
## S3 method for class 'fastai.torch_core.TensorMask' a < b
a |
tensor |
b |
tensor |
tensor
Less or equal
## S3 method for class 'fastai.torch_core.TensorMask' a <= b
a |
tensor |
b |
tensor |
tensor
Equal
## S3 method for class 'fastai.torch_core.TensorImage' a == b
a |
tensor |
b |
tensor |
tensor
Equal
## S3 method for class 'fastai.torch_core.TensorMask' a == b
a |
tensor |
b |
tensor |
tensor
Equal
## S3 method for class 'torch.Tensor' a == b
a |
tensor |
b |
tensor |
tensor
Greater
## S3 method for class 'fastai.torch_core.TensorMask' a > b
a |
tensor |
b |
tensor |
tensor
Greater or equal
## S3 method for class 'fastai.torch_core.TensorMask' a >= b
a |
tensor |
b |
tensor |
tensor
Abs
## S3 method for class 'torch.Tensor' abs(x)
x |
tensor |
tensor
Abs
## S3 method for class 'fastai.torch_core.TensorMask' abs(x)
x |
tensor, e.g.: tensor(-1:-10) |
tensor
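These S3 methods let fastai tensors be combined with the usual R operators; a minimal sketch (using the 'tensor()' helper shown above):
## Not run:
a = tensor(c(1, 2, 3))
b = tensor(c(4, 5, 6))
a + b   # elementwise addition
a * b   # elementwise multiplication
abs(tensor(-1:-10))
## End(Not run)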
Stores predictions and targets on CPU in accumulate to perform final calculations with 'func'.
AccumMetric( func, dim_argmax = NULL, activation = "no", thresh = NULL, to_np = FALSE, invert_arg = FALSE, flatten = TRUE, ... )
func |
function |
dim_argmax |
dimension argmax |
activation |
activation |
thresh |
threshold point |
to_np |
to matrix or not |
invert_arg |
invert arguments |
flatten |
flatten |
... |
additional arguments to pass |
None
Compute accuracy with 'targ' when 'pred' is bs * n_classes
accuracy(inp, targ, axis = -1)
inp |
predictions |
targ |
targets |
axis |
axis |
None
Compute accuracy when 'inp' and 'targ' are the same size.
accuracy_multi(inp, targ, thresh = 0.5, sigmoid = TRUE)
inp |
predictions |
targ |
targets |
thresh |
threshold point |
sigmoid |
sigmoid |
None
Compute accuracy after expanding 'y_true' to the size of 'y_pred'.
accuracy_thresh_expand(y_pred, y_true, thresh = 0.5, sigmoid = TRUE)
y_pred |
predictions |
y_true |
actuals |
thresh |
threshold point |
sigmoid |
sigmoid function |
None
Step for Adam with 'lr' on 'p'
adam_step(p, lr, mom, step, sqr_mom, grad_avg, sqr_avg, eps, ...)
p |
p |
lr |
learning rate |
mom |
momentum |
step |
step |
sqr_mom |
sqr momentum |
grad_avg |
grad average |
sqr_avg |
sqr average |
eps |
epsilon |
... |
additional arguments to pass |
None
Adaptive_pool
adaptive_pool(pool_type)
pool_type |
pooling type |
None
nn()$AdaptiveAvgPool layer for 'ndim'
AdaptiveAvgPool(sz = 1, ndim = 2)
sz |
size |
ndim |
dimension size |
Layer that concats 'AdaptiveAvgPool1d' and 'AdaptiveMaxPool1d'
AdaptiveConcatPool1d(size = NULL)
size |
output size |
None
Layer that concats 'AdaptiveAvgPool2d' and 'AdaptiveMaxPool2d'
AdaptiveConcatPool2d(size = NULL)
size |
output size |
None
Switcher that goes back to generator/critic when the loss goes below 'gen_thresh'/'critic_thresh'.
AdaptiveGANSwitcher(gen_thresh = NULL, critic_thresh = NULL)
gen_thresh |
generator threshold |
critic_thresh |
discriminator threshold |
None
Expand the 'target' to match the 'output' size before applying 'crit'.
AdaptiveLoss(crit)
crit |
critic |
Loss object
Add
## S3 method for class 'torch.Tensor' a + b
a |
tensor |
b |
tensor |
tensor
Sinh
## S3 method for class 'torch.Tensor' sinh(x)
x |
tensor |
tensor
Helper function that adds trigonometric date/time features to a date in the column 'field_name' of 'df'.
add_cyclic_datepart( df, field_name, prefix = NULL, drop = TRUE, time = FALSE, add_linear = FALSE )
df |
df |
field_name |
field_name |
prefix |
prefix |
drop |
drop |
time |
time |
add_linear |
add_linear |
data frame
Helper function that adds columns relevant to a date in the column 'field_name' of 'df'.
add_datepart(df, field_name, prefix = NULL, drop = TRUE, time = FALSE)
df |
df |
field_name |
field_name |
prefix |
prefix |
drop |
drop |
time |
time |
data frame
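A hedged sketch of expanding a date column into model-ready features; the column name and values are illustrative:
## Not run:
df = data.frame(date = c('2019-12-04', '2019-11-29', '2019-11-15'))
# adds columns such as year, month, day and day-of-week derived from 'date'
df = add_datepart(df, 'date')
## End(Not run)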
Add 'n_dim' channels at the end of the input.
AddChannels(n_dim)
n_dim |
number of dimensions |
Adds noise of specified color and level to the audio signal
AddNoise(noise_level = 0.05, color = 0)
noise_level |
noise level |
color |
int, color |
None
Affine_coord
affine_coord( x, mat = NULL, coord_tfm = NULL, sz = NULL, mode = "bilinear", pad_mode = "reflection", align_corners = TRUE, ... )
x |
tensor |
mat |
mat |
coord_tfm |
coordinate tfm |
sz |
sz |
mode |
mode |
pad_mode |
padding mode |
align_corners |
align corners |
... |
additional arguments |
None
Affine mat
affine_mat(...)
... |
parameters to pass |
None
Combine and apply affine and coord transforms
AffineCoordTfm( aff_fs = NULL, coord_fs = NULL, size = NULL, mode = "bilinear", pad_mode = "reflection", mode_mask = "nearest", align_corners = NULL )
aff_fs |
aff fs |
coord_fs |
coordinate fs |
size |
size |
mode |
mode |
pad_mode |
padding mode |
mode_mask |
mode mask |
align_corners |
align corners |
None
AlexNet model architecture
alexnet(pretrained = FALSE, progress)
pretrained |
pretrained or not |
progress |
to see progress bar or not |
"One weird trick..." <https://arxiv.org/abs/1404.5997>
model
## Not run:
alexnet(pretrained = FALSE, progress = TRUE)
## End(Not run)
Apply perspective transform on 'coords' with 'coeffs'
apply_perspective(coords, coeffs)
coords |
coordinates |
coeffs |
coefficient |
None
Average Precision for single-label binary classification problems
APScoreBinary( axis = -1, average = "macro", pos_label = 1, sample_weight = NULL )
axis |
axis |
average |
average |
pos_label |
pos_label |
sample_weight |
sample_weight |
None
Average Precision for multi-label classification problems
APScoreMulti( sigmoid = TRUE, average = "macro", pos_label = 1, sample_weight = NULL )
sigmoid |
sigmoid |
average |
average |
pos_label |
pos_label |
sample_weight |
sample_weight |
None
As_array
as_array(tensor)
tensor |
tensor object |
array
Get all allowed audio extensions
audio_extensions()
vector
A 'TransformBlock' for audios
AudioBlock( cache_folder = NULL, sample_rate = 16000, force_mono = TRUE, crop_signal_to = NULL )
cache_folder |
cache folder |
sample_rate |
sample rate |
force_mono |
force mono or not |
crop_signal_to |
int, crop signal |
None
Build an 'AudioBlock' from a 'path' and cache some intermediary results
AudioBlock_from_folder( path, sample_rate = 16000, force_mono = TRUE, crop_signal_to = NULL )
path |
directory, path |
sample_rate |
sample rate |
force_mono |
force mono or not |
crop_signal_to |
int, crop signal |
None
Create a 'get_audio_files' partial function that searches path suffix 'suf' and passes along 'kwargs', only in 'folders', if specified.
AudioGetter(suf = "", recurse = TRUE, folders = NULL)
suf |
suffix |
recurse |
recursive or not |
folders |
vector, folders |
None
AudioSpectrogram module
AudioSpectrogram()
None
Semantic torch tensor that represents an audio.
AudioTensor(x, sr = NULL)
x |
tensor |
sr |
sr |
tensor
Creates audio tensor from file
AudioTensor_create( fn, cache_folder = NULL, frame_offset = 0, num_frames = -1, normalize = TRUE, channels_first = TRUE )
fn |
file name |
cache_folder |
cache folder |
frame_offset |
offset |
num_frames |
number of frames |
normalize |
apply normalization or not |
channels_first |
channels first/last |
None
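An illustrative call; the file name is hypothetical:
## Not run:
# load a waveform file as an AudioTensor
at = AudioTensor_create('speaker.wav')
## End(Not run)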
Transform to create MFCC features from audio tensors.
AudioToMFCC( sample_rate = 16000, n_mfcc = 40, dct_type = 2, norm = "ortho", log_mels = FALSE, melkwargs = NULL )
sample_rate |
sample rate |
n_mfcc |
number of mel-frequency cepstral coefficients |
dct_type |
dct type |
norm |
normalization type |
log_mels |
apply log to mels |
melkwargs |
additional arguments for mels |
None
Creates AudioToMFCC from configuration file
AudioToMFCC_from_cfg(audio_cfg)
audio_cfg |
audio configuration |
None
Creates AudioToSpec from configuration file
AudioToSpec_from_cfg(audio_cfg)
audio_cfg |
audio configuration |
None
Utility func to easily create a list of flip, rotate, zoom, warp, lighting transforms.
aug_transforms( mult = 1, do_flip = TRUE, flip_vert = FALSE, max_rotate = 10, min_zoom = 1, max_zoom = 1.1, max_lighting = 0.2, max_warp = 0.2, p_affine = 0.75, p_lighting = 0.75, xtra_tfms = NULL, size = NULL, mode = "bilinear", pad_mode = "reflection", align_corners = TRUE, batch = FALSE, min_scale = 1 )
mult |
ratio |
do_flip |
to do flip |
flip_vert |
flip vertical or not |
max_rotate |
maximum rotation |
min_zoom |
minimum zoom |
max_zoom |
maximum zoom |
max_lighting |
maximum lighting |
max_warp |
maximum warp |
p_affine |
probability affine |
p_lighting |
probability lighting |
xtra_tfms |
extra transformations |
size |
size of image |
mode |
mode |
pad_mode |
padding mode |
align_corners |
align_corners |
batch |
batch size |
min_scale |
minimum scale |
None
## Not run:
URLs_PETS()
path = 'oxford-iiit-pet'
path_img = 'oxford-iiit-pet/images'
fnames = get_image_files(path_img)
dls = ImageDataLoaders_from_name_re(
  path, fnames, pat = '(.+)_\\d+.jpg$',
  item_tfms = Resize(size = 460), bs = 10,
  batch_tfms = list(aug_transforms(size = 224, min_scale = 0.75),
                    Normalize_from_stats(imagenet_stats()))
)
## End(Not run)
Keeps track of the avg grads of 'p' in 'state' with 'mom'.
average_grad(p, mom, dampening = FALSE, grad_avg = NULL, ...)
p |
p |
mom |
momentum |
dampening |
dampening |
grad_avg |
grad average |
... |
additional args to pass |
None
Average_sqr_grad
average_sqr_grad(p, sqr_mom, dampening = TRUE, sqr_avg = NULL, ...)
p |
p |
sqr_mom |
sqr momentum |
dampening |
dampening |
sqr_avg |
sqr average |
... |
additional args to pass |
None
Flattens input and output, same as nn$AvgLoss
AvgLoss(...)
... |
parameters to pass |
Loss object
nn$AvgPool layer for 'ndim'
AvgPool(ks = 2, stride = NULL, padding = 0, ndim = 2, ceil_mode = FALSE)
ks |
kernel size |
stride |
the stride of the window. Default value is kernel_size |
padding |
implicit zero padding to be added on both sides |
ndim |
dimension number |
ceil_mode |
when True, will use ceil instead of floor to compute the output shape |
None
Smooth average of the losses (exponentially weighted with 'beta')
AvgSmoothLoss(beta = 0.98)
beta |
beta, defaults to 0.98 |
Loss object
AWD-LSTM inspired by https://arxiv.org/abs/1708.02182
AWD_LSTM( vocab_sz, emb_sz, n_hid, n_layers, pad_token = 1, hidden_p = 0.2, input_p = 0.6, embed_p = 0.1, weight_p = 0.5, bidir = FALSE )
vocab_sz |
vocab_sz |
emb_sz |
emb_sz |
n_hid |
n_hid |
n_layers |
n_layers |
pad_token |
pad_token |
hidden_p |
hidden_p |
input_p |
input_p |
embed_p |
embed_p |
weight_p |
weight_p |
bidir |
bidir |
None
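A minimal sketch of constructing the encoder; the sizes follow common AWD-LSTM defaults and are illustrative:
## Not run:
model = AWD_LSTM(vocab_sz = 10000L, emb_sz = 400L, n_hid = 1152L, n_layers = 3L)
## End(Not run)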
Split a RNN 'model' in groups for differential learning rates.
awd_lstm_clas_split(model)
model |
model |
None
Split a RNN 'model' in groups for differential learning rates.
awd_lstm_lm_split(model)
model |
model |
None
Same as an AWD-LSTM, but using QRNNs instead of LSTMs
AWD_QRNN( vocab_sz, emb_sz, n_hid, n_layers, pad_token = 1, hidden_p = 0.2, input_p = 0.6, embed_p = 0.1, weight_p = 0.5, bidir = FALSE )
vocab_sz |
vocab_sz |
emb_sz |
emb_sz |
n_hid |
n_hid |
n_layers |
n_layers |
pad_token |
pad_token |
hidden_p |
hidden_p |
input_p |
input_p |
embed_p |
embed_p |
weight_p |
weight_p |
bidir |
bidir |
None
Balanced Accuracy for single-label binary classification problems
BalancedAccuracy(axis = -1, sample_weight = NULL, adjusted = FALSE)
axis |
axis |
sample_weight |
sample_weight |
adjusted |
adjusted |
None
Flattens input and output, same as nn$BaseLoss
BaseLoss(...)
... |
parameters to pass |
Loss object
Basic tokenizer that just splits on spaces
BaseTokenizer(split_char = " ")
split_char |
separator |
None
A basic critic for images 'n_channels' x 'in_size' x 'in_size'.
basic_critic(in_size, n_channels, ...)
in_size |
input size |
n_channels |
The number of channels |
... |
additional parameters to pass |
None
## Not run:
critic = basic_critic(in_size = 64, n_channels = 3, n_extra_layers = 1,
                      act_cls = partial(nn()$LeakyReLU, negative_slope = 0.2))
## End(Not run)
A basic generator from 'in_sz' to images 'n_channels' x 'out_size' x 'out_size'.
basic_generator(out_size, n_channels, ...)
out_size |
out_size |
n_channels |
n_channels |
... |
additional params to pass |
generator object
## Not run:
generator = basic_generator(out_size = 64, n_channels = 3, n_extra_layers = 1)
## End(Not run)
BasicMelSpectrogram
BasicMelSpectrogram( sample_rate = 16000, n_fft = 400, win_length = NULL, hop_length = NULL, f_min = 0, f_max = NULL, pad = 0, n_mels = 128, window_fn = torch()$hann_window, power = 2, normalized = FALSE, wkwargs = NULL, mel = TRUE, to_db = TRUE )
sample_rate |
sample rate |
n_fft |
number of fast fourier transforms |
win_length |
windowing length |
hop_length |
hopping length |
f_min |
minimum frequency |
f_max |
maximum frequency |
pad |
padding |
n_mels |
number of mel-spectrograms |
window_fn |
window function |
power |
power |
normalized |
normalized or not |
wkwargs |
additional arguments |
mel |
mel-spectrogram or not |
to_db |
to decibels |
None
Basic MFCC
BasicMFCC( sample_rate = 16000, n_mfcc = 40, dct_type = 2, norm = "ortho", log_mels = FALSE, melkwargs = NULL )
sample_rate |
sample rate |
n_mfcc |
number of mel-frequency cepstral coefficients |
dct_type |
dct type |
norm |
normalization type |
log_mels |
apply log to mels |
melkwargs |
additional arguments for mels |
None
BasicSpectrogram
BasicSpectrogram( n_fft = 400, win_length = NULL, hop_length = NULL, pad = 0, window_fn = torch()$hann_window, power = 2, normalized = FALSE, wkwargs = NULL, mel = FALSE, to_db = TRUE )
n_fft |
number of fast fourier transforms |
win_length |
windowing length |
hop_length |
hopping length |
pad |
padding mode |
window_fn |
window function |
power |
power |
normalized |
normalized or not |
wkwargs |
additional arguments |
mel |
mel-spectrogram or not |
to_db |
to decibels |
None
BatchNorm layer with 'nf' features and 'ndim' initialized depending on 'norm_type'.
BatchNorm( nf, ndim = 2, norm_type = 1, eps = 1e-05, momentum = 0.1, affine = TRUE, track_running_stats = TRUE )
nf |
input shape |
ndim |
dimension number |
norm_type |
normalization type |
eps |
epsilon |
momentum |
momentum |
affine |
affine |
track_running_stats |
track running statistics |
None
'nn.BatchNorm1d', but first flattens leading dimensions
BatchNorm1dFlat( num_features, eps = 1e-05, momentum = 0.1, affine = TRUE, track_running_stats = TRUE )
num_features |
number of features |
eps |
epsilon |
momentum |
momentum |
affine |
affine |
track_running_stats |
track running statistics |
None
Function that collects 'samples' of labelled bboxes and adds padding with 'pad_idx'.
bb_pad(samples, pad_idx = 0)
samples |
samples |
pad_idx |
pad index |
None
A 'TransformBlock' for bounding boxes in an image
BBoxBlock()
None
Delegates ('__call__','decode','setup') to ('encodes','decodes','setups') if 'split_idx' matches
BBoxLabeler(enc = NULL, dec = NULL, split_idx = NULL, order = NULL)
enc |
encoder |
dec |
decoder |
split_idx |
split by index |
order |
order |
None
A 'TransformBlock' for labeled bounding boxes, potentially with 'vocab'
BBoxLblBlock(vocab = NULL, add_na = TRUE)
vocab |
vocabulary |
add_na |
add NA |
None
## Not run:
URLs_COCO_TINY()
c(images, lbl_bbox) %<-% get_annotations('coco_tiny/train.json')
timg = Transform(ImageBW_create)
idx = 49
c(coco_fn, bbox) %<-% list(paste('coco_tiny/train', images[[idx]], sep = '/'),
                           lbl_bbox[[idx]])
coco_img = timg(coco_fn)
tbbox = LabeledBBox(TensorBBox(bbox[[1]]), bbox[[2]])
coco_bb = function(x) { TensorBBox_create(bbox[[1]]) }
coco_lbl = function(x) { bbox[[2]] }
coco_dsrc = Datasets(c(rep(coco_fn, 10)),
                     list(Image_create(), list(coco_bb),
                          list(coco_lbl, MultiCategorize(add_na = TRUE))),
                     n_inp = 1)
coco_tdl = TfmdDL(coco_dsrc, bs = 9,
                  after_item = list(BBoxLabeler(), PointScaler(), ToTensor()),
                  after_batch = list(IntToFloatTensor(), aug_transforms()))
coco_tdl %>% show_batch(dpi = 200)
## End(Not run)
Flattens input and output, same as nn$BCELoss
BCELossFlat(...)
... |
parameters to pass |
Loss object
BCEWithLogitsLossFlat
BCEWithLogitsLossFlat(...)
... |
parameters to pass |
Loss object
Hugging Face module
Blurr module
blurr()
None
Brier score for single-label classification problems
BrierScore(axis = -1, sample_weight = NULL, pos_label = NULL)
axis |
axis |
sample_weight |
sample_weight |
pos_label |
pos_label |
None
Brier score for multi-label classification problems
BrierScoreMulti( thresh = 0.5, sigmoid = TRUE, sample_weight = NULL, pos_label = NULL )
thresh |
thresh |
sigmoid |
sigmoid |
sample_weight |
sample_weight |
pos_label |
pos_label |
None
Launch a mock training to find a good batch size to minimize training time.
bs_find( object, lr, num_it = NULL, n_batch = 5, simulate_multi_gpus = TRUE, show_plot = TRUE )
object |
model/learner |
lr |
learning rate |
num_it |
number of iterations |
n_batch |
number of batches |
simulate_multi_gpus |
simulate on multi gpus or not |
show_plot |
show plot or not |
However, it may not be a good batch size to minimize the validation loss. A good batch size is where the Simple Noise Scale converges, ignoring the small growing trend with the number of iterations if one exists. The optimal batch size is about the order of magnitude at which the Simple Noise Scale converges. Typically, the optimal batch size in image classification problems will be 2-3 times lower than where the Simple Noise Scale converges.
Calculate_rouge
calculate_rouge( predicted_txts, reference_txts, rouge_keys = c("rouge1", "rouge2", "rougeL"), use_stemmer = TRUE )
predicted_txts |
predicted texts |
reference_txts |
reference texts |
rouge_keys |
rouge keys |
use_stemmer |
use stemmer or not |
None
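A hedged sketch with made-up texts:
## Not run:
calculate_rouge(
  predicted_txts = c('the cat sat on the mat'),
  reference_txts = c('a cat was sitting on the mat')
)
## End(Not run)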
Concatenate layers outputs over a given dim
Cat(layers, dim = 1)
layers |
layers |
dim |
dimension size |
None
Transform the categorical variables to that type.
Categorify(cat_names, cont_names)
cat_names |
The names of the categorical variables |
cont_names |
The names of the continuous variables |
None
'TransformBlock' for single-label categorical targets
CategoryBlock(vocab = NULL, sort = TRUE, add_na = FALSE)
vocab |
vocabulary |
sort |
sort or not |
add_na |
add NA |
Block object
Ceil
## S3 method for class 'torch.Tensor' ceiling(x)
x |
tensor |
tensor
Ceil
## S3 method for class 'fastai.torch_core.TensorMask' ceiling(x)
x |
tensor |
tensor
Changes the volume of the signal
ChangeVolume(p = 0.5, lower = 0.5, upper = 1.5)
p |
probability |
lower |
lower bound |
upper |
upper bound |
None
Return the children of 'm' and its direct parameters not registered in modules.
children_and_parameters(m)
m |
parameters |
None
Construct interpretation object from a learner
ClassificationInterpretation_from_learner( learn, ds_idx = 1, dl = NULL, act = NULL )
learn |
learner/model |
ds_idx |
ds by index |
dl |
dataloader |
act |
activation |
interpretation object
Clean_raw_keys
clean_raw_keys(wgts)
wgts |
wgts |
None
Clip bounding boxes to the image border and label the empty ones as background
clip_remove_empty(bbox, label)
bbox |
bbox |
label |
label |
None
Convenience function to easily create a config for 'create_cnn_model'
cnn_config( cut = NULL, pretrained = TRUE, n_in = 3, init = nn()$init$kaiming_normal_, custom_head = NULL, concat_pool = TRUE, lin_ftrs = NULL, ps = 0.5, bn_final = FALSE, lin_first = FALSE, y_range = NULL )
cut |
cut |
pretrained |
pre-trained or not |
n_in |
input shape |
init |
initializer |
custom_head |
custom head |
concat_pool |
concatenate pooling |
lin_ftrs |
linear filters |
ps |
dropout probability |
bn_final |
batch normalization final |
lin_first |
linear first |
y_range |
y_range |
None
Build a convnet style learner from 'dls' and 'arch'
cnn_learner( dls, arch, loss_func = NULL, pretrained = TRUE, cut = NULL, splitter = NULL, y_range = NULL, config = NULL, n_out = NULL, normalize = TRUE, opt_func = Adam(), lr = 0.001, cbs = NULL, metrics = NULL, path = NULL, model_dir = "models", wd = NULL, wd_bn_bias = FALSE, train_bn = TRUE, moms = list(0.95, 0.85, 0.95) )
dls |
data loader object |
arch |
a model architecture |
loss_func |
loss function |
pretrained |
pre-trained or not |
cut |
cut |
splitter |
It is a function that takes self.model and returns a list of parameter groups (or just one parameter group if there are no different parameter groups). |
y_range |
y_range |
config |
configuration |
n_out |
the number of out |
normalize |
normalize |
opt_func |
The function used to create the optimizer |
lr |
learning rate |
cbs |
Cbs is one or a list of Callbacks to pass to the Learner. |
metrics |
It is an optional list of metrics, that can be either functions or Metrics. |
path |
The folder where to work |
model_dir |
Path and model_dir are used to save and/or load models. |
wd |
It is the default weight decay used when training the model. |
wd_bn_bias |
It controls if weight decay is applied to BatchNorm layers and bias. |
train_bn |
It controls if BatchNorm layers are trained even when they are supposed to be frozen according to the splitter. |
moms |
The default momentums used in Learner.fit_one_cycle. |
learner object
## Not run:
URLs_MNIST_SAMPLE()
# transformations
tfms = aug_transforms(do_flip = FALSE)
path = 'mnist_sample'
bs = 20
# load into memory
data = ImageDataLoaders_from_folder(path, batch_tfms = tfms, size = 26, bs = bs)
learn = cnn_learner(data, resnet18(), metrics = accuracy, path = getwd())
## End(Not run)
Wrapper around [cocoapi evaluator](https://github.com/cocodataset/cocoapi)
COCOMetric( metric_type = COCOMetricType()$bbox, print_summary = FALSE, show_pbar = FALSE )
metric_type |
Dependent on the task you're solving. |
print_summary |
If 'TRUE', prints a table with statistics. |
show_pbar |
If 'TRUE' shows pbar when preparing the data for evaluation. |
Calculates average precision.
None
Available options for 'COCOMetric'
COCOMetricType()
None
Cohen kappa for single-label classification problems
CohenKappa(axis = -1, labels = NULL, weights = NULL, sample_weight = NULL)
axis |
axis |
labels |
labels |
weights |
weights |
sample_weight |
sample_weight |
None
Create a Learner for collaborative filtering on 'dls'.
collab_learner( dls, n_factors = 50, use_nn = FALSE, emb_szs = NULL, layers = NULL, config = NULL, y_range = NULL, loss_func = NULL, opt_func = Adam(), lr = 0.001, splitter = trainable_params(), cbs = NULL, metrics = NULL, path = NULL, model_dir = "models", wd = NULL, wd_bn_bias = FALSE, train_bn = TRUE, moms = list(0.95, 0.85, 0.95) )
dls |
a data loader object |
n_factors |
The number of factors |
use_nn |
use_nn |
emb_szs |
embedding size |
layers |
list of layers |
config |
configuration |
y_range |
y_range |
loss_func |
It can be any loss function you like. It needs to be one of fastai's if you want to use Learn.predict or Learn.get_preds, or you will have to implement special methods (see more details after the BaseLoss documentation). |
opt_func |
The function used to create the optimizer |
lr |
learning rate |
splitter |
It is a function that takes self.model and returns a list of parameter groups (or just one parameter group if there are no different parameter groups). |
cbs |
Cbs is one or a list of Callbacks to pass to the Learner. |
metrics |
It is an optional list of metrics, that can be either functions or Metrics. |
path |
The folder where to work |
model_dir |
Path and model_dir are used to save and/or load models. |
wd |
It is the default weight decay used when training the model. |
wd_bn_bias |
It controls if weight decay is applied to BatchNorm layers and bias. |
train_bn |
It controls if BatchNorm layers are trained even when they are supposed to be frozen according to the splitter. |
moms |
The default momentums used in Learner.fit_one_cycle. |
learner object
## Not run:
URLs_MOVIE_LENS_ML_100k()
c(user, item, title) %<-% list('userId', 'movieId', 'title')
ratings = fread('ml-100k/u.data', col.names = c(user, item, 'rating', 'timestamp'))
movies = fread('ml-100k/u.item', col.names = c(item, 'title', 'date', 'N', 'url',
                                               paste('g', 1:19, sep = '')))
rating_movie = ratings[movies[, .SD, .SDcols = c(item, title)], on = item]
dls = CollabDataLoaders_from_df(rating_movie, seed = 42, valid_pct = 0.1, bs = 64,
                                item_name = title, path = 'ml-100k')
learn = collab_learner(dls, n_factors = 40, y_range = c(0, 5.5))
learn %>% fit_one_cycle(1, 5e-3, wd = 1e-1)
## End(Not run)
Create a 'DataLoaders' from a given 'dblock'
CollabDataLoaders_from_dblock( dblock, source, path = ".", bs = 64, val_bs = NULL, shuffle_train = TRUE, device = NULL )
dblock |
dblock |
source |
source |
path |
The folder where to work |
bs |
The batch size |
val_bs |
The batch size for the validation DataLoader (defaults to bs) |
shuffle_train |
If we shuffle the training DataLoader or not |
device |
device |
None
Create a 'DataLoaders' suitable for collaborative filtering from 'ratings'.
CollabDataLoaders_from_df( ratings, valid_pct = 0.2, user_name = NULL, item_name = NULL, rating_name = NULL, seed = NULL, path = ".", bs = 64, val_bs = NULL, shuffle_train = TRUE, device = NULL )
ratings |
ratings |
valid_pct |
The random percentage of the dataset to set aside for validation (with an optional seed) |
user_name |
The name of the column containing the user (defaults to the first column) |
item_name |
The name of the column containing the item (defaults to the second column) |
rating_name |
The name of the column containing the rating (defaults to the third column) |
seed |
random seed |
path |
The folder where to work |
bs |
The batch size |
val_bs |
The batch size for the validation DataLoader (defaults to bs) |
shuffle_train |
If we shuffle the training DataLoader or not |
device |
the device, e.g. cpu, cuda, and etc. |
None
## Not run:
URLs_MOVIE_LENS_ML_100k()
c(user, item, title) %<-% list('userId', 'movieId', 'title')
ratings = fread('ml-100k/u.data', col.names = c(user, item, 'rating', 'timestamp'))
movies = fread('ml-100k/u.item', col.names = c(item, 'title', 'date', 'N', 'url',
                                               paste('g', 1:19, sep = '')))
rating_movie = ratings[movies[, .SD, .SDcols = c(item, title)], on = item]
dls = CollabDataLoaders_from_df(rating_movie, seed = 42, valid_pct = 0.1, bs = 64,
                                item_name = title, path = 'ml-100k')
## End(Not run)
Collect all batches, along with pred and loss, into self.data. Mainly for testing
CollectDataCallback(...)
... |
arguments to pass |
None
Read 'cols' in 'row' with potential 'pref' and 'suff'
ColReader(cols, pref = "", suff = "", label_delim = NULL)
cols |
columns |
pref |
pref |
suff |
suffix |
label_delim |
label separator |
None
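An illustrative reader; the column name and prefix are hypothetical:
## Not run:
# read file names from column 'fname', prepending an images/ prefix
get_x = ColReader('fname', pref = 'images/')
## End(Not run)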
Split 'items' (supposed to be a dataframe) by value in 'col'
ColSplitter(col = "is_valid")
col |
column |
None
Create a schedule with constant learning rate 'start_lr' for 'pct' proportion of the training, and a 'curve_type' learning rate (till 'end_lr') for remaining portion of training.
combined_flat_anneal(pct, start_lr, end_lr = 0, curve_type = "linear")
pct |
Proportion of training with a constant learning rate. |
start_lr |
Desired starting learning rate, used for beginnning pct of training. |
end_lr |
Desired end learning rate, training will conclude at this learning rate. |
curve_type |
Curve type for learning rate annealing. Options are 'linear', 'cosine', and 'exponential'. |
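A short sketch of a flat-then-anneal schedule; the values are illustrative:
## Not run:
# constant 1e-3 for the first half of training, then cosine anneal to 0
sched = combined_flat_anneal(pct = 0.5, start_lr = 1e-3, end_lr = 0,
                             curve_type = 'cosine')
## End(Not run)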
Download a competition file to a designated location, or use a default location.
competition_download_file( competition, file_name, path = NULL, force = FALSE, quiet = FALSE )
competition |
the name of the competition |
file_name |
the configuration file name |
path |
a path to download the file to |
force |
force the download if the file already exists (default FALSE) |
quiet |
suppress verbose output (default is FALSE) |
None
## Not run:
com_nm = 'titanic'
titanic_files = competition_list_files(com_nm)
titanic_files = lapply(1:length(titanic_files),
                       function(x) as.character(titanic_files[[x]]))
str(titanic_files)
if (!dir.exists(com_nm)) {
  dir.create(com_nm)
}
# download via api
competition_download_files(competition = com_nm, path = com_nm, unzip = TRUE)
## End(Not run)
Competition download files
competition_download_files( competition, path = NULL, force = FALSE, quiet = FALSE, unzip = FALSE )
competition |
the name of the competition |
path |
a path to download the file to |
force |
force the download if the file already exists (default FALSE) |
quiet |
suppress verbose output (default is TRUE) |
unzip |
unzip downloaded files |
None
Download competition leaderboards
competition_leaderboard_download(competition, path, quiet = TRUE)
competition |
the name of the competition |
path |
a path to download the file to |
quiet |
suppress verbose output (default is TRUE) |
data frame
List files for a competition
competition_list_files(competition)
competition |
the name of the competition |
list of files
## Not run:
com_nm = 'titanic'
titanic_files = competition_list_files(com_nm)
## End(Not run)
Competition submit
competition_submit(file_name, message, competition, quiet = FALSE)
file_name |
the competition metadata file |
message |
the submission description |
competition |
the competition name |
quiet |
suppress verbose output (default is FALSE) |
None
Competitions list
competitions_list( group = NULL, category = NULL, sort_by = NULL, page = 1, search = NULL )
group |
group to filter result to |
category |
category to filter result to |
sort_by |
how to sort the result, see valid_competition_sort_by for options |
page |
the page to return (default is 1) |
search |
a search term to use (default is empty string) |
list of competitions
Apply change in contrast of 'max_lighting' to batch of images with probability 'p'.
Contrast(max_lighting = 0.2, p = 0.75, draw = NULL, batch = FALSE)
max_lighting |
maximum lighting |
p |
probability |
draw |
draw |
batch |
batch |
None
Conv_norm_lr
conv_norm_lr( ch_in, ch_out, norm_layer = NULL, ks = 3, bias = TRUE, pad = 1, stride = 1, activ = TRUE, slope = 0.2, init = nn()$init$normal_, init_gain = 0.02 )
ch_in |
input |
ch_out |
output |
norm_layer |
normalization layer |
ks |
kernel size |
bias |
bias |
pad |
pad |
stride |
stride |
activ |
activation |
slope |
slope |
init |
initializer |
init_gain |
initializer gain |
None
Create a sequence of convolutional ('ni' to 'nf'), ReLU (if 'use_activ') and 'norm_type' layers.
ConvLayer( ni, nf, ks = 3, stride = 1, padding = NULL, bias = NULL, ndim = 2, norm_type = 1, bn_1st = TRUE, act_cls = nn()$ReLU, transpose = FALSE, init = "auto", xtra = NULL, bias_std = 0.01, dilation = 1, groups = 1, padding_mode = "zeros" )
ni |
number of inputs |
nf |
outputs/ number of features |
ks |
kernel size |
stride |
stride |
padding |
padding |
bias |
bias |
ndim |
dimension number |
norm_type |
normalization type |
bn_1st |
batch normalization 1st |
act_cls |
activation |
transpose |
transpose |
init |
initializer |
xtra |
xtra |
bias_std |
bias standard deviation |
dilation |
specify the dilation rate to use for dilated convolution |
groups |
groups size |
padding_mode |
padding mode, e.g 'zeros' |
None
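A minimal sketch; the channel and kernel choices are illustrative:
## Not run:
# conv + norm + ReLU block: 3 input channels -> 32 filters, 3x3 kernel, stride 2
layer = ConvLayer(ni = 3, nf = 32, ks = 3, stride = 2)
## End(Not run)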
ConvT_norm_relu
convT_norm_relu(ch_in, ch_out, norm_layer, ks = 3, stride = 2, bias = TRUE)
ch_in |
input |
ch_out |
output |
norm_layer |
normalization layer |
ks |
kernel size |
stride |
stride size |
bias |
bias true or not |
None
Blueprint for defining a metric
CorpusBLEUMetric(vocab_sz = 5000, axis = -1)
vocab_sz |
vocab_sz |
axis |
axis |
None
Cos
## S3 method for class 'torch.Tensor' cos(x)
x |
tensor |
tensor
Cos
## S3 method for class 'fastai.torch_core.TensorMask' cos(x)
x |
tensor |
tensor
Cosh
## S3 method for class 'torch.Tensor' cosh(x)
x |
tensor |
tensor
Cosh
## S3 method for class 'fastai.torch_core.TensorMask' cosh(x)
x |
tensor |
tensor
Crappifier
crappifier(path_lr, path_hr)
path_lr |
path from (origin) |
path_hr |
path to (destination) |
None
## Not run:
items = get_image_files(path_hr)
parallel(crappifier(path_lr, path_hr), items)
## End(Not run)
Cut off the body of a typically pretrained 'arch' as determined by 'cut'
create_body(...)
... |
parameters to pass |
None
## Not run:
encoder = create_body(resnet34(), pretrained = TRUE)
## End(Not run)
Create custom convnet architecture using 'arch', 'n_in' and 'n_out'
create_cnn_model( arch, n_out, cut = NULL, pretrained = TRUE, n_in = 3, init = nn()$init$kaiming_normal_, custom_head = NULL, concat_pool = TRUE, lin_ftrs = NULL, ps = 0.5, bn_final = FALSE, lin_first = FALSE, y_range = NULL )
arch |
a model architecture |
n_out |
number of outs |
cut |
cut |
pretrained |
pretrained model or not |
n_in |
input shape |
init |
initializer |
custom_head |
custom head |
concat_pool |
concatenate pooling |
lin_ftrs |
linear filters |
ps |
dropout probability |
bn_final |
batch normalization final |
lin_first |
linear first |
y_range |
y_range |
None
A bunch of convolutions stacked together.
create_fcn(ni, nout, ks = 9, conv_sizes = c(128, 256, 128), stride = 1)
ni |
number of input channels |
nout |
output shape |
ks |
kernel size |
conv_sizes |
convolution sizes |
stride |
stride |
model
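An illustrative builder call, e.g. a 1-channel series classified into 4 classes:
## Not run:
model = create_fcn(ni = 1, nout = 4)
## End(Not run)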
Model head that takes 'nf' features, runs through 'lin_ftrs', and outputs 'n_out' classes.
create_head( nf, n_out, lin_ftrs = NULL, ps = 0.5, concat_pool = TRUE, bn_final = FALSE, lin_first = FALSE, y_range = NULL )
nf |
number of features |
n_out |
number of out features |
lin_ftrs |
linear features |
ps |
dropout probability |
concat_pool |
concatenate pooling |
bn_final |
batch normalization final |
lin_first |
linear first |
y_range |
y_range |
None
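A short sketch, mapping 512 pooled features to 10 classes; the numbers are illustrative:
## Not run:
head = create_head(nf = 512, n_out = 10)
## End(Not run)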
Creates an InceptionTime arch from 'ni' channels to 'nout' outputs.
create_inception( ni, nout, kss = c(39, 19, 9), depth = 6, bottleneck_size = 32, nb_filters = 32, head = TRUE )
ni |
number of input channels |
nout |
number of outputs, should be equal to the number of classes for classification tasks. |
kss |
kernel sizes for the inception Block. |
depth |
depth |
bottleneck_size |
The number of channels on the convolution bottleneck. |
nb_filters |
Channels on the convolution of each kernel. |
head |
TRUE if we want a head attached. |
model
A simple model builder to create a bunch of BatchNorm1d, Dropout and Linear layers, with 'act_fn' activations.
create_mlp(ni, nout, linear_sizes = c(500, 500, 500))
ni |
number of input channels |
nout |
output shape |
linear_sizes |
linear output sizes |
model
Basic 11 Layer - 1D resnet builder
create_resnet( ni, nout, kss = c(9, 5, 3), conv_sizes = c(64, 128, 128), stride = 1 )
ni |
number of input channels |
nout |
output shape |
kss |
kernel size |
conv_sizes |
convolution sizes |
stride |
stride |
model
Create custom unet architecture
create_unet_model( arch, n_out, img_size, pretrained = TRUE, cut = NULL, n_in = 3, blur = FALSE, blur_final = TRUE, self_attention = FALSE, y_range = NULL, last_cross = TRUE, bottle = FALSE, act_cls = nn()$ReLU, init = nn()$init$kaiming_normal_, norm_type = NULL )
arch |
architecture |
n_out |
number of out features |
img_size |
image shape |
pretrained |
pretrained or not |
cut |
cut |
n_in |
number of input |
blur |
blur is used to avoid checkerboard artifacts at each layer. |
blur_final |
blur final is specific to the last layer. |
self_attention |
self_attention determines if we use a self attention layer at the third block before the end. |
y_range |
If y_range is passed, the last activations go through a sigmoid rescaled to that range. |
last_cross |
last_cross |
bottle |
bottle |
act_cls |
activation |
init |
initializer |
norm_type |
normalization type |
None
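A hedged sketch of building a segmentation model; the backbone and sizes are illustrative:
## Not run:
model = create_unet_model(resnet18(), n_out = 2, img_size = c(128, 128))
## End(Not run)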
Center crop or pad an image to 'size'
CropPad(size, pad_mode = "zeros", ...)
size |
size |
pad_mode |
padding mode |
... |
additional arguments |
None
Randomly crops the full spectrogram to the length in ms specified by 'duration'
CropTime(duration, pad_mode = AudioPadType()$Zeros)
duration |
int, duration |
pad_mode |
padding mode, by default 'AudioPadType$Zeros' |
None
Same as 'nn$Module', but no need for subclasses to call 'super().__init__'
CrossEntropyLossFlat(...)
... |
parameters to pass |
Loss object
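An illustrative use as a learner loss (assuming an existing 'dls'):
## Not run:
learn = cnn_learner(dls, resnet18(), loss_func = CrossEntropyLossFlat())
## End(Not run)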
Basic class handling tweaks of the training loop by changing a 'Learner' in various events
CSVLogger(fname = "history.csv", append = FALSE)
fname |
file name |
append |
append or not |
None
## Not run:
URLs_MNIST_SAMPLE()
# transformations
tfms = aug_transforms(do_flip = FALSE)
path = 'mnist_sample'
bs = 20
# load into memory
data = ImageDataLoaders_from_folder(path, batch_tfms = tfms, size = 26, bs = bs)
learn = cnn_learner(data, resnet18(), metrics = accuracy, path = getwd())
learn %>% fit_one_cycle(2, cbs = CSVLogger())
## End(Not run)
Move data to CUDA device
CudaCallback(device = NULL)
device |
device name |
None
Implementation of 'https://arxiv.org/abs/1905.04899'
CutMix(alpha = 1)
alpha |
alpha |
None
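A short sketch of enabling CutMix during training (assuming an existing 'dls'):
## Not run:
learn = cnn_learner(dls, resnet18(), metrics = accuracy, cbs = CutMix(alpha = 1))
## End(Not run)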
Replace all 'areas' in 'x' with N(0,1) noise
cutout_gaussian(x, areas)
x |
tensor |
areas |
areas |
None
Initialize and return a 'Learner' object with the data in 'dls', CycleGAN model 'm', optimizer function 'opt_func', metrics 'metrics', and callbacks 'cbs'.
cycle_learner( dls, m, opt_func = Adam(), show_imgs = TRUE, imgA = TRUE, imgB = TRUE, show_img_interval = 10, ... )
dls |
dataloader |
m |
CycleGAN model |
opt_func |
optimizer |
show_imgs |
show images |
imgA |
image a (from) |
imgB |
image B (to) |
show_img_interval |
show images interval rate |
... |
additional arguments |
If 'show_imgs' is TRUE, it will show intermediate predictions during training: domain B-to-A predictions if 'imgA' is TRUE and/or domain A-to-B predictions if 'imgB' is TRUE, refreshed every 'show_img_interval' epochs. Other 'Learner' arguments can be passed as well.
None
CycleGAN model.
CycleGAN( ch_in = 3, ch_out = 3, n_features = 64, disc_layers = 3, gen_blocks = 9, lsgan = TRUE, drop = 0, norm_layer = NULL )
ch_in |
input |
ch_out |
output |
n_features |
number of features |
disc_layers |
discriminator layers |
gen_blocks |
generator blocks |
lsgan |
ls gan |
drop |
dropout rate |
norm_layer |
normalization layer |
When called, takes in a batch of real images from both domains and outputs fake images for the opposite domains (with the generators). Also outputs identity images after passing the images into the generator of their own domain (needed for the identity loss). Attributes: 'G_A' ('nn.Module'): takes real input B and generates fake input A. 'G_B' ('nn.Module'): takes real input A and generates fake input B. 'D_A' ('nn.Module'): trained to distinguish between real input A and fake input A. 'D_B' ('nn.Module'): trained to distinguish between real input B and fake input B.
None
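A minimal sketch wiring the model into 'cycle_learner' (assuming 'dls' holds paired-domain image loaders):
## Not run:
m = CycleGAN(ch_in = 3, ch_out = 3, n_features = 64)
learn = cycle_learner(dls, m, opt_func = Adam())
## End(Not run)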
CycleGAN loss function. The individual loss terms are also attributes of this class that are accessed by fastai for recording during training.
CycleGANLoss(cgan, l_A = 10, l_B = 10, l_idt = 0.5, lsgan = TRUE)
cgan |
The CycleGAN model. |
l_A |
lambda_A, weight of domain A losses. (default=10) |
l_B |
lambda_B, weight of domain B losses. (default=10) |
l_idt |
lambda_idt, weight of identity losses. (default=0.5) |
lsgan |
Whether or not to use LSGAN objective (default=True) |
Attributes: 'self.cgan' ('nn.Module'): The CycleGAN model. 'self.l_A' ('float'): lambda_A, weight of domain A losses. 'self.l_B' ('float'): lambda_B, weight of domain B losses. 'self.l_idt' ('float'): lambda_idt, weight of identity losses. 'self.crit' ('AdaptiveLoss'): The adversarial loss function (either a BCE or MSE loss depending on the 'lsgan' argument). 'self.real_A' and 'self.real_B' ('fastai.torch_core.TensorImage'): Real images from domain A and B. 'self.id_loss_A' ('torch.FloatTensor'): The identity loss for domain A calculated in the forward function. 'self.id_loss_B' ('torch.FloatTensor'): The identity loss for domain B calculated in the forward function. 'self.gen_loss' ('torch.FloatTensor'): The generator loss calculated in the forward function. 'self.cyc_loss' ('torch.FloatTensor'): The cyclic loss calculated in the forward function.
Learner Callback for training a CycleGAN model.
CycleGANTrainer(...)
... |
parameters to pass |
None
Data Loaders
Data_Loaders(...)
... |
parameters to pass |
loader object
## Not run:
data = Data_Loaders(train_loader, test_loader)
learn = Learner(data, Net(), loss_func = F$nll_loss, opt_func = Adam(),
                metrics = accuracy, cbs = CudaCallback())
learn %>% fit_one_cycle(1, 1e-2)
## End(Not run)
Generic container to quickly build 'Datasets' and 'DataLoaders'
DataBlock( blocks = NULL, dl_type = NULL, getters = NULL, n_inp = NULL, item_tfms = NULL, batch_tfms = NULL, ... )
blocks |
input blocks |
dl_type |
DL application |
getters |
how to get the dataset |
n_inp |
n_inp is the number of elements in the tuples that should be considered part of the input and will default to 1 if tfms consists of one set of transforms |
item_tfms |
One or several transforms applied to the items before batching them |
batch_tfms |
One or several transforms applied to the batches once they are formed |
... |
additional parameters to pass |
Block object
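A hedged sketch of a classification block; the getters are assumed to pass through '...', and the names are illustrative:
## Not run:
dblock = DataBlock(blocks = list(ImageBlock(), CategoryBlock()),
                   get_items = get_image_files,
                   get_y = parent_label,
                   item_tfms = Resize(size = 224))
## End(Not run)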
Create a 'DataLoaders' object from 'source'
dataloaders(object, ...)
object |
model |
... |
additional parameters to pass |
## Not run:
dls = TabularDataTable(df, procs, cat_names, cont_names,
                       y_names = dep_var, splits = list(tr_idx, ts_idx)) %>%
  dataloaders(bs = 50)
## End(Not run)
A dataset that creates a list from each 'tfms', passed through 'item_tfms'
Datasets( items = NULL, tfms = NULL, tls = NULL, n_inp = NULL, dl_type = NULL, use_list = NULL, do_setup = TRUE, split_idx = NULL, train_setup = TRUE, splits = NULL, types = NULL, verbose = FALSE )
Datasets( items = NULL, tfms = NULL, tls = NULL, n_inp = NULL, dl_type = NULL, use_list = NULL, do_setup = TRUE, split_idx = NULL, train_setup = TRUE, splits = NULL, types = NULL, verbose = FALSE )
items |
items |
tfms |
transformations |
tls |
tls |
n_inp |
n_inp |
dl_type |
DL type |
use_list |
use list |
do_setup |
do setup |
split_idx |
split by index |
train_setup |
train setup |
splits |
splits |
types |
types |
verbose |
verbose |
None
Open a 'DICOM' file
dcmread(fn, force = FALSE)
dcmread(fn, force = FALSE)
fn |
file name |
force |
logical, force |
dicom object
## Not run: img = dcmread('hemorrhage.dcm') ## End(Not run)
## Not run: img = dcmread('hemorrhage.dcm') ## End(Not run)
Debias
debias(mom, damp, step)
debias(mom, damp, step)
mom |
mom |
damp |
damp |
step |
step |
None
A module to debug inside a model
Debugger(...)
Debugger(...)
... |
parameters to pass |
None
Visualizes a model's decisions using cumulative SHAP values.
decision_plot(object, class_id = 0, row_idx = -1, dpi = 200, ...)
decision_plot(object, class_id = 0, row_idx = -1, dpi = 200, ...)
object |
ShapInterpretation object |
class_id |
is used to indicate the class of interest for a classification model. It can either be an int or str representation for a class of choice. Each colored line in the plot represents the model's prediction for a single observation. |
row_idx |
If no index is passed in to use from the data, it will default to the first ten samples of the test set. Note: plotting too many samples at once can make the plot illegible. |
dpi |
dots per inch |
... |
additional arguments |
None
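A hedged sketch of the SHAP plotting helpers (assumes 'interp' is the ShapInterpretation object referenced above, built from a trained tabular learner; how it is constructed is not shown here):
## Not run: 
interp %>% decision_plot(class_id = 0)
interp %>% force_plot(class_id = 0)
## End(Not run)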
Decode the special tokens in 'tokens'
decode_spec_tokens(tokens)
decode_spec_tokens(tokens)
tokens |
tokens |
None
Default split of a model between body and head
default_split(m)
default_split(m)
m |
parameters |
None
Creates order-1 and order-2 deltas from the spectrogram and concatenates them with the original
Delta(width = 9)
Delta(width = 9)
width |
int, width |
None
Denormalize_imagenet
denormalize_imagenet(img)
denormalize_imagenet(img)
img |
img |
None
Densenet121
densenet121(pretrained = FALSE, progress)
densenet121(pretrained = FALSE, progress)
pretrained |
pretrained or not |
progress |
to see progress bar or not |
"Densely Connected Convolutional Networks" <https://arxiv.org/pdf/1608.06993.pdf>
model
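The densenet variants plug into 'cnn_learner' like the other torchvision backbones (a sketch, assuming 'dls' exists; the arguments shown are illustrative):
## Not run: 
learn = cnn_learner(dls, densenet121(pretrained = TRUE, progress = TRUE), metrics = error_rate)
## End(Not run)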
Densenet161
densenet161(pretrained = FALSE, progress)
densenet161(pretrained = FALSE, progress)
pretrained |
pretrained or not |
progress |
to see progress bar or not |
"Densely Connected Convolutional Networks" <https://arxiv.org/pdf/1608.06993.pdf>
model
Densenet169
densenet169(pretrained = FALSE, progress)
densenet169(pretrained = FALSE, progress)
pretrained |
pretrained or not |
progress |
to see progress bar or not |
"Densely Connected Convolutional Networks" <https://arxiv.org/pdf/1608.06993.pdf>
model
Densenet201
densenet201(pretrained = FALSE, progress)
densenet201(pretrained = FALSE, progress)
pretrained |
pretrained or not |
progress |
to see progress bar or not |
"Densely Connected Convolutional Networks" <https://arxiv.org/pdf/1608.06993.pdf>
model
Resnet block of 'nf' features. 'conv_kwargs' are passed to 'conv_layer'.
DenseResBlock( nf, norm_type = 1, ks = 3, stride = 1, padding = NULL, bias = NULL, ndim = 2, bn_1st = TRUE, act_cls = nn()$ReLU, transpose = FALSE, init = "auto", xtra = NULL, bias_std = 0.01, dilation = 1, groups = 1, padding_mode = "zeros" )
DenseResBlock( nf, norm_type = 1, ks = 3, stride = 1, padding = NULL, bias = NULL, ndim = 2, bn_1st = TRUE, act_cls = nn()$ReLU, transpose = FALSE, init = "auto", xtra = NULL, bias_std = 0.01, dilation = 1, groups = 1, padding_mode = "zeros" )
nf |
number of features |
norm_type |
normalization type |
ks |
kernel size |
stride |
stride |
padding |
padding |
bias |
bias |
ndim |
number of dimensions |
bn_1st |
batch normalization 1st |
act_cls |
activation |
transpose |
transpose |
init |
initializer |
xtra |
xtra |
bias_std |
bias standard deviation |
dilation |
dilation number |
groups |
groups number |
padding_mode |
padding mode |
block
Plots the value of a variable on the x-axis and the SHAP value of the same variable on the y-axis. Accepts a class_id and variable_name.
dependence_plot(object, variable_name = "", class_id = 0, dpi = 200, ...)
dependence_plot(object, variable_name = "", class_id = 0, dpi = 200, ...)
object |
ShapInterpretation object |
variable_name |
the name of the column |
class_id |
is used to indicate the class of interest for a classification model. It can either be an int or str representation for a class of choice. This plot shows how the model depends on the given variable. Vertical dispersion of the datapoints represent interaction effects. Gray ticks along the y-axis are datapoints where the variable's values were NaN. |
dpi |
dots per inch |
... |
additional arguments |
None
Apply a random dihedral transformation to a batch of images with a probability 'p'
DeterministicDihedral( size = NULL, mode = "bilinear", pad_mode = "reflection", align_corners = NULL )
DeterministicDihedral( size = NULL, mode = "bilinear", pad_mode = "reflection", align_corners = NULL )
size |
size |
mode |
mode |
pad_mode |
padding mode |
align_corners |
align corners |
None
DeterministicDraw
DeterministicDraw(vals)
DeterministicDraw(vals)
vals |
values |
None
Flip the batch every other call
DeterministicFlip( size = NULL, mode = "bilinear", pad_mode = "reflection", align_corners = TRUE, ... )
DeterministicFlip( size = NULL, mode = "bilinear", pad_mode = "reflection", align_corners = TRUE, ... )
size |
size |
mode |
mode |
pad_mode |
padding mode |
align_corners |
align corners |
... |
parameters to pass |
None
Detuplify_pg
detuplify_pg(d)
detuplify_pg(d)
d |
d |
None
Dice coefficient metric for binary target in segmentation
Dice(axis = 1)
Dice(axis = 1)
axis |
axis |
None
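Dice is typically passed as a metric to a segmentation learner (a sketch; 'unet_learner' and the segmentation 'dls' are assumed):
## Not run: 
learn = unet_learner(dls, resnet34(), metrics = list(Dice(), foreground_acc))
## End(Not run)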
Dicom_windows module
dicom_windows()
dicom_windows()
None
Apply a random dihedral transformation to a batch of images with a probability 'p'
Dihedral( p = 0.5, draw = NULL, size = NULL, mode = "bilinear", pad_mode = "reflection", align_corners = NULL, batch = FALSE )
Dihedral( p = 0.5, draw = NULL, size = NULL, mode = "bilinear", pad_mode = "reflection", align_corners = NULL, batch = FALSE )
p |
probability |
draw |
draw |
size |
size |
mode |
mode |
pad_mode |
padding mode |
align_corners |
align corners |
batch |
batch |
None
Return a random dihedral matrix
dihedral_mat(x, p = 0.5, draw = NULL, batch = FALSE)
dihedral_mat(x, p = 0.5, draw = NULL, batch = FALSE)
x |
tensor |
p |
probability |
draw |
draw |
batch |
batch |
None
Randomly flip with probability 'p'
DihedralItem(p = 1, nm = NULL, before_call = NULL)
DihedralItem(p = 1, nm = NULL, before_call = NULL)
p |
probability |
nm |
nm |
before_call |
before call |
None
Dim
## S3 method for class 'torch.Tensor' dim(x)
## S3 method for class 'torch.Tensor' dim(x)
x |
tensor |
tensor
Dim
## S3 method for class 'fastai.torch_core.TensorMask' dim(x)
## S3 method for class 'fastai.torch_core.TensorMask' dim(x)
x |
tensor |
tensor
Discriminator
discriminator( ch_in, n_ftrs = 64, n_layers = 3, norm_layer = NULL, sigmoid = FALSE )
discriminator( ch_in, n_ftrs = 64, n_layers = 3, norm_layer = NULL, sigmoid = FALSE )
ch_in |
input |
n_ftrs |
number of filters |
n_layers |
number of layers |
norm_layer |
normalization layer |
sigmoid |
apply sigmoid function or not |
Div
## S3 method for class 'torch.Tensor' a / b
## S3 method for class 'torch.Tensor' a / b
a |
tensor |
b |
tensor |
tensor
Transform multichannel audios into single channel
DownmixMono(enc = NULL, dec = NULL, split_idx = NULL, order = NULL)
DownmixMono(enc = NULL, dec = NULL, split_idx = NULL, order = NULL)
enc |
encoder |
dec |
decoder |
split_idx |
split by index |
order |
order, by default is NULL |
None
Return a dropout mask of the same type as 'x', size 'sz', with probability 'p' to cancel an element.
dropout_mask(x, sz, p)
dropout_mask(x, sz, p)
x |
x |
sz |
sz |
p |
p |
None
Evaluate 'm' on a dummy input of a certain 'size'
dummy_eval(m, size = list(64, 64))
dummy_eval(m, size = list(64, 64))
m |
m parameter |
size |
size |
None
Create a U-Net from a given architecture.
DynamicUnet( encoder, n_classes, img_size, blur = FALSE, blur_final = TRUE, self_attention = FALSE, y_range = NULL, last_cross = TRUE, bottle = FALSE, act_cls = nn()$ReLU, init = nn()$init$kaiming_normal_, norm_type = NULL )
DynamicUnet( encoder, n_classes, img_size, blur = FALSE, blur_final = TRUE, self_attention = FALSE, y_range = NULL, last_cross = TRUE, bottle = FALSE, act_cls = nn()$ReLU, init = nn()$init$kaiming_normal_, norm_type = NULL )
encoder |
encoder |
n_classes |
number of classes |
img_size |
image size |
blur |
blur is used to avoid checkerboard artifacts at each layer. |
blur_final |
blur final is specific to the last layer. |
self_attention |
self_attention determines if we use a self attention layer at the third block before the end. |
y_range |
If y_range is passed, the last activations go through a sigmoid rescaled to that range. |
last_cross |
last cross |
bottle |
bottle |
act_cls |
activation |
init |
initializer |
norm_type |
normalization type |
None
EarlyStoppingCallback
EarlyStoppingCallback(...)
EarlyStoppingCallback(...)
... |
parameters to pass |
None
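The '...' are forwarded to the underlying fastai callback; 'monitor' and 'patience' below are the usual fastai arguments and are assumed to be accepted here:
## Not run: 
learn %>% fit_one_cycle(10, cbs = EarlyStoppingCallback(monitor = 'valid_loss', patience = 2))
## End(Not run)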
A 'DataLoader' with a custom 'collate_fn' that batches items as required for inferring the model.
efficientdet_infer_dl(dataset, batch_tfms = NULL, ...)
efficientdet_infer_dl(dataset, batch_tfms = NULL, ...)
dataset |
Possibly a 'Dataset' object, but more generally, any 'Sequence' that returns records. |
batch_tfms |
Transforms to be applied at the batch level. |
... |
dataloader_kwargs: Keyword arguments that will be internally passed to a Pytorch 'DataLoader'. The parameter 'collate_fn' is already defined internally and cannot be passed here. |
None
Fastai 'Learner' adapted for EfficientDet.
efficientdet_learner(dls, model, cbs = NULL, ...)
efficientdet_learner(dls, model, cbs = NULL, ...)
dls |
'Sequence' of 'DataLoaders' passed to the 'Learner'. The first one will be used for training and the second for validation. |
model |
The model to train. |
cbs |
Optional 'Sequence' of callbacks. |
... |
learner_kwargs: Keyword arguments that will be internally passed to 'Learner'. |
model
Creates the efficientdet model specified by 'model_name'.
efficientdet_model(model_name, num_classes, img_size, pretrained = TRUE)
efficientdet_model(model_name, num_classes, img_size, pretrained = TRUE)
model_name |
Specifies the model to create. For pretrained models, check [this](https://github.com/rwightman/efficientdet-pytorch#models) table. |
num_classes |
Number of classes of your dataset (including background). |
img_size |
Image size that will be fed to the model. Must be square and divisible by 128. |
pretrained |
If TRUE, use a pretrained backbone (on COCO). |
model
Efficientdet predict dataloader
efficientdet_predict_dl(model, infer_dl, show_pbar = TRUE)
efficientdet_predict_dl(model, infer_dl, show_pbar = TRUE)
model |
model |
infer_dl |
infer_dl |
show_pbar |
show_pbar |
None
A 'DataLoader' with a custom 'collate_fn' that batches items as required for training the model.
efficientdet_train_dl(dataset, batch_tfms = NULL, ...)
efficientdet_train_dl(dataset, batch_tfms = NULL, ...)
dataset |
Possibly a 'Dataset' object, but more generally, any 'Sequence' that returns records. |
batch_tfms |
Transforms to be applied at the batch level. |
... |
dataloader_kwargs: Keyword arguments that will be internally passed to a Pytorch 'DataLoader'. The parameter 'collate_fn' is already defined internally and cannot be passed here. |
None
A 'DataLoader' with a custom 'collate_fn' that batches items as required for training the model.
efficientdet_valid_dl(dataset, batch_tfms = NULL, ...)
efficientdet_valid_dl(dataset, batch_tfms = NULL, ...)
dataset |
Possibly a 'Dataset' object, but more generally, any 'Sequence' that returns records. |
batch_tfms |
Transforms to be applied at the batch level. |
... |
dataloader_kwargs: Keyword arguments that will be internally passed to a Pytorch 'DataLoader'. The parameter 'collate_fn' is already defined internally and cannot be passed here. |
None
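A sketch wiring the efficientdet helpers together ('train_ds'/'valid_ds' are assumed datasets; the model name is one of the entries from the table linked above, and batch_size/num_workers are standard Pytorch 'DataLoader' keyword arguments):
## Not run: 
model = efficientdet_model('tf_efficientdet_lite0', num_classes = 3, img_size = 384)
train_dl = efficientdet_train_dl(train_ds, batch_size = 16, num_workers = 2, shuffle = TRUE)
valid_dl = efficientdet_valid_dl(valid_ds, batch_size = 16, num_workers = 2)
learn = efficientdet_learner(dls = list(train_dl, valid_dl), model = model)
## End(Not run)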
Rule of thumb to pick embedding size corresponding to 'n_cat'
emb_sz_rule(n_cat)
emb_sz_rule(n_cat)
n_cat |
n_cat |
None
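For reference, the heuristic used by fastai is min(600, round(1.6 * n_cat^0.56)); this is an implementation detail of the Python library and may change:
## Not run: 
emb_sz_rule(1000)
## End(Not run)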
Embedding layer with truncated normal initialization
Embedding(ni, nf)
Embedding(ni, nf)
ni |
inputs |
nf |
outputs / number of features |
None
Apply dropout with probability 'embed_p' to an embedding layer 'emb'.
EmbeddingDropout(emb, embed_p)
EmbeddingDropout(emb, embed_p)
emb |
emb |
embed_p |
embed_p |
None
1 - 'accuracy'
error_rate(inp, targ, axis = -1)
error_rate(inp, targ, axis = -1)
inp |
The predictions of the model |
targ |
The corresponding labels |
axis |
Axis |
tensor
## Not run: learn = cnn_learner(dls, resnet34(), metrics = error_rate) ## End(Not run)
## Not run: learn = cnn_learner(dls, resnet34(), metrics = error_rate) ## End(Not run)
Exp
## S3 method for class 'torch.Tensor' exp(x)
## S3 method for class 'torch.Tensor' exp(x)
x |
tensor |
tensor
Root mean square percentage error of the exponential of predictions and targets
exp_rmspe(preds, targs)
exp_rmspe(preds, targs)
preds |
predictions |
targs |
targets |
None
Exp
## S3 method for class 'fastai.torch_core.TensorMask' exp(x)
## S3 method for class 'fastai.torch_core.TensorMask' exp(x)
x |
tensor |
tensor
Explained variance between predictions and targets
ExplainedVariance(sample_weight = NULL)
ExplainedVariance(sample_weight = NULL)
sample_weight |
sample_weight |
None
Expm1
## S3 method for class 'torch.Tensor' expm1(x)
## S3 method for class 'torch.Tensor' expm1(x)
x |
tensor |
tensor
Expm1
## S3 method for class 'fastai.torch_core.TensorMask' expm1(x)
## S3 method for class 'fastai.torch_core.TensorMask' expm1(x)
x |
tensor |
tensor
Export_generator
export_generator( learn, generator_name = "generator", path = ".", convert_to = "B" )
export_generator( learn, generator_name = "generator", path = ".", convert_to = "B" )
learn |
learner/model |
generator_name |
generator name |
path |
path (save dir) |
convert_to |
convert to |
None
F1 score for single-label classification problems
F1Score( axis = -1, labels = NULL, pos_label = 1, average = "binary", sample_weight = NULL )
F1Score( axis = -1, labels = NULL, pos_label = 1, average = "binary", sample_weight = NULL )
axis |
axis |
labels |
labels |
pos_label |
pos_label |
average |
average |
sample_weight |
sample_weight |
None
F1 score for multi-label classification problems
F1ScoreMulti( thresh = 0.5, sigmoid = TRUE, labels = NULL, pos_label = 1, average = "macro", sample_weight = NULL )
F1ScoreMulti( thresh = 0.5, sigmoid = TRUE, labels = NULL, pos_label = 1, average = "macro", sample_weight = NULL )
thresh |
thresh |
sigmoid |
sigmoid |
labels |
labels |
pos_label |
pos_label |
average |
average |
sample_weight |
sample_weight |
None
A 'DataLoader' with a custom 'collate_fn' that batches items as required for inferring the model.
faster_rcnn_infer_dl(dataset, batch_tfms = NULL, ...)
faster_rcnn_infer_dl(dataset, batch_tfms = NULL, ...)
dataset |
Possibly a 'Dataset' object, but more generally, any 'Sequence' that returns records. |
batch_tfms |
Transforms to be applied at the batch level. |
... |
dataloader_kwargs: Keyword arguments that will be internally passed to a Pytorch 'DataLoader'. The parameter 'collate_fn' is already defined internally and cannot be passed here. |
None
Fastai 'Learner' adapted for Faster RCNN.
faster_rcnn_learner(dls, model, cbs = NULL, ...)
faster_rcnn_learner(dls, model, cbs = NULL, ...)
dls |
'Sequence' of 'DataLoaders' passed to the 'Learner'. The first one will be used for training and the second for validation. |
model |
The model to train. |
cbs |
Optional 'Sequence' of callbacks. |
... |
learner_kwargs: Keyword arguments that will be internally passed to 'Learner'. |
model
FasterRCNN model implemented by torchvision.
faster_rcnn_model( num_classes, backbone = NULL, remove_internal_transforms = TRUE, pretrained = TRUE )
faster_rcnn_model( num_classes, backbone = NULL, remove_internal_transforms = TRUE, pretrained = TRUE )
num_classes |
Number of classes. |
backbone |
Backbone model to use. Defaults to a resnet50_fpn model. |
remove_internal_transforms |
The torchvision model internally applies transforms like resizing and normalization, but we already do this at the 'Dataset' level, so it's safe to remove those internal transforms. |
pretrained |
Argument passed to 'fasterrcnn_resnet50_fpn' if 'backbone' is NULL. By default it is set to TRUE: this is generally used when training a new model (transfer learning). 'pretrained = FALSE' is used during inference (prediction) for cases where the users have their own pretrained weights. faster_rcnn_kwargs: Keyword arguments that internally are going to be passed to 'torchvision.models.detection.faster_rcnn.FasterRCNN'. |
model
Faster RCNN predict dataloader
faster_rcnn_predict_dl(model, infer_dl, show_pbar = TRUE)
faster_rcnn_predict_dl(model, infer_dl, show_pbar = TRUE)
model |
model |
infer_dl |
infer_dl |
show_pbar |
show_pbar |
None
A 'DataLoader' with a custom 'collate_fn' that batches items as required for training the model.
faster_rcnn_train_dl(dataset, batch_tfms = NULL, ...)
faster_rcnn_train_dl(dataset, batch_tfms = NULL, ...)
dataset |
Possibly a 'Dataset' object, but more generally, any 'Sequence' that returns records. |
batch_tfms |
Transforms to be applied at the batch level. |
... |
dataloader_kwargs: Keyword arguments that will be internally passed to a Pytorch 'DataLoader'. The parameter 'collate_fn' is already defined internally and cannot be passed here. |
None
A 'DataLoader' with a custom 'collate_fn' that batches items as required for training the model.
faster_rcnn_valid_dl(dataset, batch_tfms = NULL, ...)
faster_rcnn_valid_dl(dataset, batch_tfms = NULL, ...)
dataset |
Possibly a 'Dataset' object, but more generally, any 'Sequence' that returns records. |
batch_tfms |
Transforms to be applied at the batch level. |
... |
dataloader_kwargs: Keyword arguments that will be internally passed to a Pytorch 'DataLoader'. The parameter 'collate_fn' is already defined internally and cannot be passed here. |
None
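A sketch wiring the Faster RCNN helpers together ('train_ds'/'valid_ds' are assumed datasets; batch_size/num_workers are standard Pytorch 'DataLoader' keyword arguments):
## Not run: 
model = faster_rcnn_model(num_classes = 3)
train_dl = faster_rcnn_train_dl(train_ds, batch_size = 16, num_workers = 2, shuffle = TRUE)
valid_dl = faster_rcnn_valid_dl(valid_ds, batch_size = 16, num_workers = 2)
learn = faster_rcnn_learner(dls = list(train_dl, valid_dl), model = model)
## End(Not run)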
FBeta score with 'beta' for single-label classification problems
FBeta( beta, axis = -1, labels = NULL, pos_label = 1, average = "binary", sample_weight = NULL )
FBeta( beta, axis = -1, labels = NULL, pos_label = 1, average = "binary", sample_weight = NULL )
beta |
beta |
axis |
axis |
labels |
labels |
pos_label |
pos_label |
average |
average |
sample_weight |
sample_weight |
None
FBeta score with 'beta' for multi-label classification problems
FBetaMulti( beta, thresh = 0.5, sigmoid = TRUE, labels = NULL, pos_label = 1, average = "macro", sample_weight = NULL )
FBetaMulti( beta, thresh = 0.5, sigmoid = TRUE, labels = NULL, pos_label = 1, average = "macro", sample_weight = NULL )
beta |
beta |
thresh |
thresh |
sigmoid |
sigmoid |
labels |
labels |
pos_label |
pos_label |
average |
average |
sample_weight |
sample_weight |
None
A callback to fetch predictions during the training loop
FetchPredsCallback( ds_idx = 1, dl = NULL, with_input = FALSE, with_decoded = FALSE, cbs = NULL, reorder = TRUE )
FetchPredsCallback( ds_idx = 1, dl = NULL, with_input = FALSE, with_decoded = FALSE, cbs = NULL, reorder = TRUE )
ds_idx |
dataset index |
dl |
DL application |
with_input |
with input or not |
with_decoded |
with decoded or not |
cbs |
callbacks |
reorder |
reorder or not |
None
Split 'items' by providing file 'fname' (contains names of valid items separated by newline).
FileSplitter(fname)
FileSplitter(fname)
fname |
file name |
None
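A minimal sketch ('valid.txt' is an assumed file containing one valid item name per line):
## Not run: 
splitter = FileSplitter('valid.txt')
## End(Not run)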
Fill the missing values in continuous columns.
FillMissing( cat_names, cont_names, fill_strategy = FillStrategy_MEDIAN(), add_col = TRUE, fill_val = 0 )
FillMissing( cat_names, cont_names, fill_strategy = FillStrategy_MEDIAN(), add_col = TRUE, fill_val = 0 )
cat_names |
The names of the categorical variables |
cont_names |
The names of the continuous variables |
fill_strategy |
The strategy of filling |
add_col |
add_col |
fill_val |
fill_val |
None
## Not run: procs = list(FillMissing(),Categorify(),Normalize()) ## End(Not run)
## Not run: procs = list(FillMissing(),Categorify(),Normalize()) ## End(Not run)
An enumeration.
FillStrategy_CONSTANT()
FillStrategy_CONSTANT()
None
Find coefficients for warp tfm from 'p1' to 'p2'
find_coeffs(p1, p2)
find_coeffs(p1, p2)
p1 |
coefficient p1 |
p2 |
coefficient p2 |
None
Fine tune with 'freeze' for 'freeze_epochs', then with 'unfreeze' for 'epochs', using discriminative LR
fine_tune( object, epochs, base_lr = 0.002, freeze_epochs = 1, lr_mult = 100, pct_start = 0.3, div = 5, ... )
fine_tune( object, epochs, base_lr = 0.002, freeze_epochs = 1, lr_mult = 100, pct_start = 0.3, div = 5, ... )
object |
learner/model |
epochs |
epoch number |
base_lr |
base learning rate |
freeze_epochs |
freeze epochs number |
lr_mult |
learning rate multiply |
pct_start |
start percentage |
div |
divide |
... |
additional arguments |
None
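A typical transfer-learning call (a sketch, assuming a pretrained 'learn'):
## Not run: 
learn %>% fine_tune(3, base_lr = 2e-3, freeze_epochs = 1)
## End(Not run)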
Fit_flat_cos
fit_flat_cos( object, n_epoch, lr = NULL, div_final = 1e+05, pct_start = 0.75, wd = NULL, cbs = NULL, reset_opt = FALSE )
fit_flat_cos( object, n_epoch, lr = NULL, div_final = 1e+05, pct_start = 0.75, wd = NULL, cbs = NULL, reset_opt = FALSE )
object |
learner/model |
n_epoch |
number of epochs |
lr |
learning rate |
div_final |
divide final value |
pct_start |
start percentage |
wd |
weight decay |
cbs |
callbacks |
reset_opt |
reset optimizer |
None
Fit 'self.model' for 'n_epoch' at flat 'start_lr' before 'curve_type' annealing to 'end_lr' with weight decay of 'wd' and callbacks 'cbs'.
fit_flat_lin( object, n_epochs = 100, n_epochs_decay = 100, start_lr = NULL, end_lr = 0, curve_type = "linear", wd = NULL, cbs = NULL, reset_opt = FALSE )
fit_flat_lin( object, n_epochs = 100, n_epochs_decay = 100, start_lr = NULL, end_lr = 0, curve_type = "linear", wd = NULL, cbs = NULL, reset_opt = FALSE )
object |
model / learner |
n_epochs |
number of epochs |
n_epochs_decay |
number of epochs with decay |
start_lr |
Desired starting learning rate, used for beginning pct of training. |
end_lr |
Desired end learning rate, training will conclude at this learning rate. |
curve_type |
Curve type for learning rate annealing. Options are 'linear', 'cosine', and 'exponential'. |
wd |
weight decay |
cbs |
callbacks |
reset_opt |
reset optimizer |
None
Fit one cycle
fit_one_cycle(object, ...)
fit_one_cycle(object, ...)
object |
model |
... |
parameters to pass, e.g. lr, n_epoch, wd, etc. |
None
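For example (a sketch, assuming 'learn' exists):
## Not run: 
learn %>% fit_one_cycle(5, 1e-3, wd = 1e-2)
## End(Not run)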
Fit_sgdr
fit_sgdr( object, n_cycles, cycle_len, lr_max = NULL, cycle_mult = 2, cbs = NULL, reset_opt = FALSE, wd = NULL )
fit_sgdr( object, n_cycles, cycle_len, lr_max = NULL, cycle_mult = 2, cbs = NULL, reset_opt = FALSE, wd = NULL )
object |
learner/model |
n_cycles |
number of cycles |
cycle_len |
length of cycle |
lr_max |
maximum learning rate |
cycle_mult |
cycle mult |
cbs |
callbacks |
reset_opt |
reset optimizer |
wd |
weight decay |
None
Fit the model on this learner with 'lr' learning rate, 'wd' weight decay for 'epochs' with 'callbacks' as cbs argument.
## S3 method for class 'fastai.learner.Learner' fit(object, ...)
## S3 method for class 'fastai.learner.Learner' fit(object, ...)
object |
a learner object |
... |
parameters to pass |
train history
Fit the model on this learner with 'lr' learning rate, 'wd' weight decay for 'epochs' with 'callbacks'.
## S3 method for class 'fastai.tabular.learner.TabularLearner' fit(object, ...)
## S3 method for class 'fastai.tabular.learner.TabularLearner' fit(object, ...)
object |
model |
... |
additional arguments |
data frame
Fit the model on this learner with 'lr' learning rate, 'wd' weight decay for 'epochs' with 'callbacks'.
## S3 method for class 'fastai.vision.gan.GANLearner' fit(object, ...)
## S3 method for class 'fastai.vision.gan.GANLearner' fit(object, ...)
object |
model |
... |
additional parameters to pass |
train history
## Not run: learn %>% fit(1, 2e-4, wd = 0) ## End(Not run)
## Not run: learn %>% fit(1, 2e-4, wd = 0) ## End(Not run)
Fix fit
fix_fit(disable_graph = FALSE)
fix_fit(disable_graph = FALSE)
disable_graph |
to remove dynamic plot, by default is FALSE |
None
Fix various messy things we've seen in documents
fix_html(x)
fix_html(x)
x |
text |
string
Switcher to do 'n_crit' iterations of the critic then 'n_gen' iterations of the generator.
FixedGANSwitcher(n_crit = 1, n_gen = 1)
FixedGANSwitcher(n_crit = 1, n_gen = 1)
n_crit |
number of critic (discriminator) iterations |
n_gen |
number of generator iterations |
None
Flatten 'x' to a single dimension, e.g. at end of a model. 'full' for rank-1 tensor
Flatten(full = FALSE)
Flatten(full = FALSE)
full |
bool, full or not |
Check that 'inp' and 'targ' have the same number of elements and flatten them.
flatten_check(inp, targ)
flatten_check(inp, targ)
inp |
predictions |
targ |
targets |
tensor
Return the list of all submodules and parameters of 'm'
flatten_model(m)
flatten_model(m)
m |
parameters |
None
Randomly flip a batch of images with a probability 'p'
Flip( p = 0.5, draw = NULL, size = NULL, mode = "bilinear", pad_mode = "reflection", align_corners = TRUE, batch = FALSE )
Flip( p = 0.5, draw = NULL, size = NULL, mode = "bilinear", pad_mode = "reflection", align_corners = TRUE, batch = FALSE )
p |
probability |
draw |
draw |
size |
size of image |
mode |
mode |
pad_mode |
reflection, zeros, border as string parameter |
align_corners |
align corners or not |
batch |
batch or not |
None
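Batch transforms such as Flip are usually passed via the 'batch_tfms' argument when building the data (a sketch; the surrounding DataBlock pieces are assumed, and Hue is documented later in this reference):
## Not run: 
batch_tfms = list(Flip(p = 0.5), Hue(max_hue = 0.1, p = 0.75))
## End(Not run)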
Return a random flip matrix
flip_mat(x, p = 0.5, draw = NULL, batch = FALSE)
flip_mat(x, p = 0.5, draw = NULL, batch = FALSE)
x |
tensor |
p |
probability |
draw |
draw |
batch |
batch |
None
Randomly flip with probability 'p'
FlipItem(p = 0.5)
FlipItem(p = 0.5)
p |
probability |
None
Tensor to float
float(tensor)
float(tensor)
tensor |
tensor |
tensor
Floor
## S3 method for class 'torch.Tensor' floor(x)
## S3 method for class 'torch.Tensor' floor(x)
x |
tensor |
tensor
Floor divide
## S3 method for class 'torch.Tensor' x %/% y
## S3 method for class 'torch.Tensor' x %/% y
x |
tensor |
y |
tensor |
tensor
Floor mod
## S3 method for class 'torch.Tensor' x %% y
## S3 method for class 'torch.Tensor' x %% y
x |
tensor |
y |
tensor |
tensor
Floor
## S3 method for class 'fastai.torch_core.TensorMask' floor(x)
## S3 method for class 'fastai.torch_core.TensorMask' floor(x)
x |
tensor |
tensor
Module
fmodule(...)
fmodule(...)
... |
parameters to pass |
Decorator to create an nn()$Module using f as forward method
None
A PyTorch Dataset class that can be created from a folder 'path' of images, for the sole purpose of inference. Optional 'transforms'
FolderDataset(path, transforms = NULL)
FolderDataset(path, transforms = NULL)
path |
path to dir |
transforms |
transformations |
can be provided. Attributes: 'self.files': A list of the filenames in the folder. 'self.totensor': 'torchvision.transforms.ToTensor' transform. 'self.transform': The transforms passed in as 'transforms' to the constructor.
None
Visualizes the SHAP values with an added force layout. Accepts a class_id which is used to indicate the class of interest for a classification model.
force_plot(object, class_id = 0, ...)
force_plot(object, class_id = 0, ...)
object |
ShapInterpretation object |
class_id |
Accepts a class_id which is used to indicate the class of interest for a classification model. It can either be an int or str representation for a class of choice. |
... |
additional arguments |
None
Computes non-background accuracy for multiclass segmentation
foreground_acc(inp, targ, bkg_idx = 0, axis = 1)
foreground_acc(inp, targ, bkg_idx = 0, axis = 1)
inp |
predictions |
targ |
targets |
bkg_idx |
bkg_idx |
axis |
axis |
None
ForgetMult gate applied to 'x' and 'f' on the CPU.
forget_mult_CPU(x, f, first_h = NULL, batch_first = TRUE, backward = FALSE)
forget_mult_CPU(x, f, first_h = NULL, batch_first = TRUE, backward = FALSE)
x |
x |
f |
f |
first_h |
first_h |
batch_first |
batch_first |
backward |
backward |
None
Wrapper around the CUDA kernels for the ForgetMult gate.
ForgetMultGPU(...)
ForgetMultGPU(...)
... |
parameters to pass |
None
Freeze a model
freeze(object, ...)
freeze(object, ...)
object |
A model |
... |
Additional parameters |
None
## Not run: learnR %>% freeze() ## End(Not run)
## Not run: learnR %>% freeze() ## End(Not run)
Split 'items' by result of 'func' ('TRUE' for validation, 'FALSE' for training set).
FuncSplitter(func)
FuncSplitter(func)
func |
function |
None
Reshape x to size
fView(...)
fView(...)
... |
parameters to pass |
None
Critic to train a 'GAN'.
gan_critic(n_channels = 3, nf = 128, n_blocks = 3, p = 0.15)
gan_critic(n_channels = 3, nf = 128, n_blocks = 3, p = 0.15)
n_channels |
number of channels |
nf |
number of features |
n_blocks |
number of blocks |
p |
probability |
GAN object
Define loss functions for a GAN from 'loss_gen' and 'loss_crit'.
gan_loss_from_func(loss_gen, loss_crit, weights_gen = NULL)
gan_loss_from_func(loss_gen, loss_crit, weights_gen = NULL)
loss_gen |
generator loss |
loss_crit |
discriminator loss |
weights_gen |
weight generator |
None
'Callback' that handles multiplying the learning rate by 'mult_lr' for the critic.
GANDiscriminativeLR(mult_lr = 5)
GANDiscriminativeLR(mult_lr = 5)
mult_lr |
learning rate multiplier for the critic |
Create a GAN from 'learn_gen' and 'learn_crit'.
GANLearner_from_learners( gen_learn, crit_learn, switcher = NULL, weights_gen = NULL, gen_first = FALSE, switch_eval = TRUE, show_img = TRUE, clip = NULL, cbs = NULL, metrics = NULL, loss_func = NULL, opt_func = Adam(), lr = 0.001, splitter = trainable_params(), path = NULL, model_dir = "models", wd = NULL, wd_bn_bias = FALSE, train_bn = TRUE, moms = list(0.95, 0.85, 0.95) )
GANLearner_from_learners( gen_learn, crit_learn, switcher = NULL, weights_gen = NULL, gen_first = FALSE, switch_eval = TRUE, show_img = TRUE, clip = NULL, cbs = NULL, metrics = NULL, loss_func = NULL, opt_func = Adam(), lr = 0.001, splitter = trainable_params(), path = NULL, model_dir = "models", wd = NULL, wd_bn_bias = FALSE, train_bn = TRUE, moms = list(0.95, 0.85, 0.95) )
gen_learn |
generator learner |
crit_learn |
discriminator learner |
switcher |
switcher |
weights_gen |
weights generator |
gen_first |
generator first |
switch_eval |
switch evaluation |
show_img |
show image or not |
clip |
clip value |
cbs |
Cbs is one or a list of Callbacks to pass to the Learner. |
metrics |
It is an optional list of metrics, that can be either functions or Metrics. |
loss_func |
loss function |
opt_func |
The function used to create the optimizer |
lr |
learning rate |
splitter |
It is a function that takes self.model and returns a list of parameter groups (or just one parameter group if there are no different parameter groups). |
path |
The folder where to work |
model_dir |
Path and model_dir are used to save and/or load models. |
wd |
It is the default weight decay used when training the model. |
wd_bn_bias |
It controls if weight decay is applied to BatchNorm layers and bias. |
train_bn |
It controls if BatchNorm layers are trained even when they are supposed to be frozen according to the splitter. |
moms |
The default momentums used in Learner$fit_one_cycle. |
None
Create a WGAN from 'data', 'generator' and 'critic'.
GANLearner_wgan( dls, generator, critic, switcher = NULL, clip = 0.01, switch_eval = FALSE, gen_first = FALSE, show_img = TRUE, cbs = NULL, metrics = NULL, opt_func = Adam(), lr = 0.001, splitter = trainable_params, path = NULL, model_dir = "models", wd = NULL, wd_bn_bias = FALSE, train_bn = TRUE, moms = list(0.95, 0.85, 0.95) )
GANLearner_wgan( dls, generator, critic, switcher = NULL, clip = 0.01, switch_eval = FALSE, gen_first = FALSE, show_img = TRUE, cbs = NULL, metrics = NULL, opt_func = Adam(), lr = 0.001, splitter = trainable_params, path = NULL, model_dir = "models", wd = NULL, wd_bn_bias = FALSE, train_bn = TRUE, moms = list(0.95, 0.85, 0.95) )
dls |
dataloader |
generator |
generator |
critic |
critic |
switcher |
switcher |
clip |
clip value |
switch_eval |
switch evaluation |
gen_first |
generator first |
show_img |
show image or not |
cbs |
callbacks |
metrics |
metrics |
opt_func |
optimization function |
lr |
learning rate |
splitter |
splitter |
path |
path |
model_dir |
model directory |
wd |
weight decay |
wd_bn_bias |
weight decay bn bias |
train_bn |
It controls if BatchNorm layers are trained even when they are supposed to be frozen according to the splitter. |
moms |
momentums |
None
## Not run: learn = GANLearner_wgan(dls, generator, critic, opt_func = partial(Adam(), mom=0.)) ## End(Not run)
## Not run: learn = GANLearner_wgan(dls, generator, critic, opt_func = partial(Adam(), mom=0.)) ## End(Not run)
Wrapper around 'crit_loss_func' and 'gen_loss_func'
GANLoss(gen_loss_func, crit_loss_func, gan_model)
GANLoss(gen_loss_func, crit_loss_func, gan_model)
gen_loss_func |
generator loss function |
crit_loss_func |
discriminator loss function |
gan_model |
GAN model |
None
Wrapper around a 'generator' and a 'critic' to create a GAN.
GANModule(generator = NULL, critic = NULL, gen_mode = FALSE)
GANModule(generator = NULL, critic = NULL, gen_mode = FALSE)
generator |
generator |
critic |
critic |
gen_mode |
generator mode or not |
None
Handles GAN Training.
GANTrainer( switch_eval = FALSE, clip = NULL, beta = 0.98, gen_first = FALSE, show_img = TRUE )
GANTrainer( switch_eval = FALSE, clip = NULL, beta = 0.98, gen_first = FALSE, show_img = TRUE )
switch_eval |
switch evaluation |
clip |
clip value |
beta |
beta parameter |
gen_first |
generator first |
show_img |
show image or not |
None
'Callback' that saves the predictions and targets, optionally 'with_loss'
GatherPredsCallback( with_input = FALSE, with_loss = FALSE, save_preds = NULL, save_targs = NULL, concat_dim = 0 )
GatherPredsCallback( with_input = FALSE, with_loss = FALSE, save_preds = NULL, save_targs = NULL, concat_dim = 0 )
with_input |
include inputs or not |
with_loss |
include loss or not |
save_preds |
save predictions |
save_targs |
save targets/actuals |
concat_dim |
concatenate dimensions |
None
Apply gaussian_blur2d kornia filter
gauss_blur2d(x, s)
gauss_blur2d(x, s)
x |
image |
s |
effect |
None
Generate noise
generate_noise(fn, size = 100)
generate_noise(fn, size = 100)
fn |
path |
size |
the size |
None
## Not run: generate_noise() ## End(Not run)
## Not run: generate_noise() ## End(Not run)
Open a COCO style json in 'fname' and return the lists of filenames (with maybe 'prefix') and labelled bboxes.
get_annotations(fname, prefix = NULL)
get_annotations(fname, prefix = NULL)
fname |
folder name |
prefix |
prefix |
None
Get audio files in 'path' recursively, only in 'folders', if specified.
get_audio_files(path, recurse = TRUE, folders = NULL)
get_audio_files(path, recurse = TRUE, folders = NULL)
path |
path |
recurse |
recursive or not |
folders |
vector, folders |
None
Bias for item or user (based on 'is_item') for all in 'arr'
get_bias(object, arr, is_item = TRUE, convert = TRUE)
get_bias(object, arr, is_item = TRUE, convert = TRUE)
object |
learner/model to extract the bias from |
arr |
R data frame |
is_item |
logical, is item |
convert |
to R matrix |
tensor
## Not run: movie_bias = learn %>% get_bias(top_movies, is_item = TRUE) ## End(Not run)
## Not run: movie_bias = learn %>% get_bias(top_movies, is_item = TRUE) ## End(Not run)
Get_c
get_c(dls)
get_c(dls)
dls |
dataloader object |
number of classes
## Not run: get_c(dls) ## End(Not run)
## Not run: get_c(dls) ## End(Not run)
Extract confusion matrix
get_confusion_matrix(object)
get_confusion_matrix(object)
object |
model |
matrix
## Not run: model %>% get_confusion_matrix() ## End(Not run)
## Not run: model %>% get_confusion_matrix() ## End(Not run)
Get data loaders
get_data_loaders(train_batch_size, val_batch_size)
get_data_loaders(train_batch_size, val_batch_size)
train_batch_size |
train dataset batch size |
val_batch_size |
validation dataset batch size |
None
Get image matrix
get_dcm_matrix(img, type = "raw", scan = "", size = 50, convert = TRUE)
get_dcm_matrix(img, type = "raw", scan = "", size = 50, convert = TRUE)
img |
dicom file |
type |
img transformation |
scan |
apply uniform or gaussian blur effects |
size |
size of image |
convert |
to R matrix or keep tensor |
tensor
## Not run: img = dcmread('hemorrhage.dcm') img %>% get_dcm_matrix(type = 'raw') ## End(Not run)
## Not run: img = dcmread('hemorrhage.dcm') img %>% get_dcm_matrix(type = 'raw') ## End(Not run)
Get dicom files in 'path' recursively, only in 'folders', if specified.
get_dicom_files(path, recurse = TRUE, folders = NULL)
get_dicom_files(path, recurse = TRUE, folders = NULL)
path |
path to files |
recurse |
recursive or not |
folders |
folder names |
list of files
## Not run: items = get_dicom_files("siim_small/train/") ## End(Not run)
## Not run: items = get_dicom_files("siim_small/train/") ## End(Not run)
Given image files from two domains ('pathA', 'pathB'), create 'DataLoaders' object.
get_dls( pathA, pathB, num_A = NULL, num_B = NULL, load_size = 512, crop_size = 256, bs = 4, num_workers = 2 )
get_dls( pathA, pathB, num_A = NULL, num_B = NULL, load_size = 512, crop_size = 256, bs = 4, num_workers = 2 )
pathA |
path A (from domain) |
pathB |
path B (to domain) |
num_A |
subset of A data |
num_B |
subset of B data |
load_size |
load size |
crop_size |
crop size |
bs |
batch size |
num_workers |
number of workers |
The loading and random-crop sizes 'load_size' and 'crop_size' default to 512 and 256. Batch size is specified by 'bs' (default=4).
None
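A minimal sketch (the horse2zebra-style folder names are assumptions):
## Not run: 
dls = get_dls('trainA', 'trainB', load_size = 512, crop_size = 256, bs = 4)
## End(Not run)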
Get default embedding size from 'TabularPreprocessor' 'proc' or the ones in 'sz_dict'
get_emb_sz(to, sz_dict = NULL)
get_emb_sz(to, sz_dict = NULL)
to |
to |
sz_dict |
dictionary size |
None
Get all the files in 'path' with optional 'extensions', optionally with 'recurse', only in 'folders', if specified.
get_files( path, extensions = NULL, recurse = TRUE, folders = NULL, followlinks = TRUE )
get_files( path, extensions = NULL, recurse = TRUE, folders = NULL, followlinks = TRUE )
path |
path |
extensions |
extensions |
recurse |
recurse |
folders |
folders |
followlinks |
followlinks |
list
Return a grid of 'n' axes, 'rows' by 'cols'
get_grid( n, nrows = NULL, ncols = NULL, add_vert = 0, figsize = NULL, double = FALSE, title = NULL, return_fig = FALSE, imsize = 3 )
get_grid( n, nrows = NULL, ncols = NULL, add_vert = 0, figsize = NULL, double = FALSE, title = NULL, return_fig = FALSE, imsize = 3 )
n |
n |
nrows |
number of rows |
ncols |
number of columns |
add_vert |
add vertical |
figsize |
figure size |
double |
double |
title |
title |
return_fig |
return figure or not |
imsize |
image size |
None
Returns the architecture (str), config (obj), tokenizer (obj), and model (obj) given at minimum a 'pre-trained model name or path'.
get_hf_objects(...)
get_hf_objects(...)
... |
parameters to pass |
Specify a 'task' to ensure the right "AutoModelFor<task>" is used to create the model. Optionally, you can pass a config (obj), tokenizer (class), and/or model (class) (along with any related kwargs for each) to control exactly which huggingface objects are returned.
None
Get image files in 'path' recursively, only in 'folders', if specified.
get_image_files(path, recurse = TRUE, folders = NULL)
get_image_files(path, recurse = TRUE, folders = NULL)
path |
The folder where to work |
recurse |
recursive path |
folders |
folder names |
None
## Not run: URLs_PETS() path = 'oxford-iiit-pet' path_img = 'oxford-iiit-pet/images' fnames = get_image_files(path_img) ## End(Not run)
## Not run: URLs_PETS() path = 'oxford-iiit-pet' path_img = 'oxford-iiit-pet/images' fnames = get_image_files(path_img) ## End(Not run)
Create a language model from 'arch' and its 'config'.
get_language_model(arch, vocab_sz, config = NULL, drop_mult = 1)
get_language_model(arch, vocab_sz, config = NULL, drop_mult = 1)
arch |
arch |
vocab_sz |
vocab_sz |
config |
config |
drop_mult |
drop_mult |
model
A prediction function that takes the Learner object 'learn' with the trained model, the 'test_path' folder with the images to perform
get_preds_cyclegan( learn, test_path, pred_path, bs = 4, num_workers = 4, suffix = "tif" )
get_preds_cyclegan( learn, test_path, pred_path, bs = 4, num_workers = 4, suffix = "tif" )
learn |
learner/model |
test_path |
testdat path |
pred_path |
predict data path |
bs |
batch size |
num_workers |
number of workers |
suffix |
suffix |
batch inference on, and the output folder 'pred_path' where the predictions will be saved, with a batch size 'bs', 'num_workers', and suffix of the prediction images 'suffix' (default="tif").
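A minimal sketch (folder names are assumptions):
## Not run: 
learn %>% get_preds_cyclegan('testA', 'preds', bs = 4, suffix = 'png')
## End(Not run)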
Create a text classifier from 'arch' and its 'config', maybe 'pretrained'
get_text_classifier( arch, vocab_sz, n_class, seq_len = 72, config = NULL, drop_mult = 1, lin_ftrs = NULL, ps = NULL, pad_idx = 1, max_len = 1440, y_range = NULL )
get_text_classifier( arch, vocab_sz, n_class, seq_len = 72, config = NULL, drop_mult = 1, lin_ftrs = NULL, ps = NULL, pad_idx = 1, max_len = 1440, y_range = NULL )
arch |
arch |
vocab_sz |
vocab_sz |
n_class |
n_class |
seq_len |
seq_len |
config |
config |
drop_mult |
drop_mult |
lin_ftrs |
lin_ftrs |
ps |
ps |
pad_idx |
pad_idx |
max_len |
max_len |
y_range |
y_range |
None
Get text files in 'path' recursively, only in 'folders', if specified.
get_text_files(path, recurse = TRUE, folders = NULL)
get_text_files(path, recurse = TRUE, folders = NULL)
path |
path |
recurse |
recurse |
folders |
folders |
None
Weight for item or user (based on 'is_item') for all in 'arr'
get_weights(object, arr, is_item = TRUE, convert = FALSE)
get_weights(object, arr, is_item = TRUE, convert = FALSE)
object |
learner/model to extract the weights from |
arr |
R data frame |
is_item |
logical, is item |
convert |
to R matrix |
tensor
## Not run: movie_w = learn %>% get_weights(top_movies, is_item = TRUE, convert = TRUE) ## End(Not run)
## Not run: movie_w = learn %>% get_weights(top_movies, is_item = TRUE, convert = TRUE) ## End(Not run)
Accumulate gradients before updating weights
GradientAccumulation(n_acc = 32)
GradientAccumulation(n_acc = 32)
n_acc |
number of samples to accumulate gradients over before updating the weights |
None
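For example, to update weights only after roughly 64 samples regardless of the DataLoader batch size (a sketch):
## Not run: 
learn %>% fit_one_cycle(1, 1e-3, cbs = GradientAccumulation(n_acc = 64))
## End(Not run)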
Split 'items' from the grand parent folder names ('train_name' and 'valid_name').
GrandparentSplitter(train_name = "train", valid_name = "valid")
GrandparentSplitter(train_name = "train", valid_name = "valid")
train_name |
train folder name |
valid_name |
validation folder name |
None
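A minimal sketch for the usual train/valid folder layout:
## Not run: 
# expects paths like .../train/<label>/img.jpg and .../valid/<label>/img.jpg
splitter = GrandparentSplitter(train_name = 'train', valid_name = 'valid')
## End(Not run)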
Tensor to grayscale tensor. Uses the ITU-R 601-2 luma transform.
grayscale(x)
grayscale(x)
x |
tensor |
None
Greater
## S3 method for class 'torch.Tensor' a > b
## S3 method for class 'torch.Tensor' a > b
a |
tensor |
b |
tensor |
tensor
Greater or equal
## S3 method for class 'torch.Tensor' a >= b
## S3 method for class 'torch.Tensor' a >= b
a |
tensor |
b |
tensor |
tensor
Hamming loss for single-label classification problems
HammingLoss(axis = -1, sample_weight = NULL)
HammingLoss(axis = -1, sample_weight = NULL)
axis |
axis |
sample_weight |
sample_weight |
Loss object
Hamming loss for multi-label classification problems
HammingLossMulti( thresh = 0.5, sigmoid = TRUE, labels = NULL, sample_weight = NULL )
HammingLossMulti( thresh = 0.5, sigmoid = TRUE, labels = NULL, sample_weight = NULL )
thresh |
threshold |
sigmoid |
sigmoid |
labels |
labels |
sample_weight |
sample_weight |
Loss object
Check if 'm' has at least one parameter
has_params(m)
has_params(m)
m |
m parameter |
None
Return 'TRUE' if 'm' is a pooling layer or has one in its children
has_pool_type(m)
has_pool_type(m)
m |
parameters |
None
A HF_BaseInput object is returned from the decodes method of HF_BatchTransform as a means to customize '@typedispatched' functions like DataLoaders.show_batch and Learner.show_results. It represents the "input_ids" of a huggingface sequence as a tensor with a show method that requires a huggingface tokenizer for proper display.
HF_BaseInput(...)
HF_BaseInput(...)
... |
parameters to pass |
None
HF_BaseModelCallback
HF_BaseModelCallback(...)
HF_BaseModelCallback(...)
... |
parameters to pass |
None
Same as 'nn.Module', but no need for subclasses to call 'super().__init__'
HF_BaseModelWrapper( hf_model, output_hidden_states = FALSE, output_attentions = FALSE, ... )
HF_BaseModelWrapper( hf_model, output_hidden_states = FALSE, output_attentions = FALSE, ... )
hf_model |
model |
output_hidden_states |
output hidden states or not |
output_attentions |
output attentions |
... |
additional arguments to pass |
None
Handles everything you need to assemble a mini-batch of inputs and targets, as well as decode the dictionary produced as a byproduct of the tokenization process in the 'encodes' method.
HF_BeforeBatchTransform( hf_arch, hf_tokenizer, max_length = NULL, padding = TRUE, truncation = TRUE, is_split_into_words = FALSE, n_tok_inps = 1, ... )
HF_BeforeBatchTransform( hf_arch, hf_tokenizer, max_length = NULL, padding = TRUE, truncation = TRUE, is_split_into_words = FALSE, n_tok_inps = 1, ... )
hf_arch |
architecture |
hf_tokenizer |
tokenizer |
max_length |
maximum length |
padding |
padding or not |
truncation |
truncation or not |
is_split_into_words |
to split into words |
n_tok_inps |
number tok inputs |
... |
additional arguments |
None
Handles everything you need to assemble a mini-batch of inputs and targets, as well as decode the dictionary produced
HF_CausalLMBeforeBatchTransform( hf_arch, hf_tokenizer, max_length = NULL, padding = TRUE, truncation = TRUE, is_split_into_words = FALSE, n_tok_inps = 1, ignore_token_id = -100, ... )
HF_CausalLMBeforeBatchTransform( hf_arch, hf_tokenizer, max_length = NULL, padding = TRUE, truncation = TRUE, is_split_into_words = FALSE, n_tok_inps = 1, ignore_token_id = -100, ... )
hf_arch |
architecture |
hf_tokenizer |
tokenizer |
max_length |
maximum length |
padding |
padding or not |
truncation |
truncation or not |
is_split_into_words |
to split into words |
n_tok_inps |
number tok inputs |
ignore_token_id |
ignore token id |
... |
additional arguments |
as a byproduct of the tokenization process in the 'encodes' method.
None
Load a dataset
HF_load_dataset( path, name = NULL, data_dir = NULL, data_files = NULL, split = NULL, cache_dir = NULL, features = NULL, download_config = NULL, download_mode = NULL, ignore_verifications = FALSE, save_infos = FALSE, script_version = NULL, ... )
HF_load_dataset( path, name = NULL, data_dir = NULL, data_files = NULL, split = NULL, cache_dir = NULL, features = NULL, download_config = NULL, download_mode = NULL, ignore_verifications = FALSE, save_infos = FALSE, script_version = NULL, ... )
path |
path |
name |
name |
data_dir |
dataset dir |
data_files |
dataset files |
split |
split |
cache_dir |
cache directory |
features |
features |
download_config |
download configuration |
download_mode |
download mode |
ignore_verifications |
ignore verifications or not |
save_infos |
save information or not |
script_version |
script version |
... |
additional arguments |
This method does the following under the hood: 1. Download and import in the library the dataset loading script from 'path' if it's not already cached inside the library. Processing scripts are small python scripts that define the citation, info and format of the dataset, contain the URL to the original data files and the code to load examples from the original data files. You can find some of the scripts here: https://github.com/huggingface/datasets/datasets and easily upload yours to share them using the CLI 'datasets-cli'. 2. Run the dataset loading script which will: * Download the dataset file from the original URL (see the script) if it's not already downloaded and cached. * Process and cache the dataset in typed Arrow tables. Arrow tables are arbitrarily long, typed tables which can store nested objects and be mapped to numpy/pandas/python standard types. They can be accessed directly from drive, loaded in RAM or even streamed over the web. 3. Return a dataset built from the requested splits in 'split' (default: all).
data frame
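A minimal sketch ('squad' is a public dataset on the huggingface hub):
## Not run: 
df = HF_load_dataset('squad', split = 'train')
## End(Not run)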
Handles everything you need to assemble a mini-batch of inputs and targets, as well as decode the dictionary produced
HF_QABatchTransform( hf_arch, hf_tokenizer, max_length = NULL, padding = TRUE, truncation = TRUE, is_split_into_words = FALSE, n_tok_inps = 1, hf_input_return_type = HF_QuestionAnswerInput(), ... )
HF_QABatchTransform( hf_arch, hf_tokenizer, max_length = NULL, padding = TRUE, truncation = TRUE, is_split_into_words = FALSE, n_tok_inps = 1, hf_input_return_type = HF_QuestionAnswerInput(), ... )
hf_arch |
architecture |
hf_tokenizer |
tokenizer |
max_length |
maximum length |
padding |
padding |
truncation |
truncation |
is_split_into_words |
to split into words or not |
n_tok_inps |
number of tok inputs |
hf_input_return_type |
input return type |
... |
additional arguments |
as a byproduct of the tokenization process in the 'encodes' method.
None
Handles everything you need to assemble a mini-batch of inputs and targets, as well as decode the dictionary produced
HF_QABeforeBatchTransform( hf_arch, hf_tokenizer, max_length = NULL, padding = TRUE, truncation = TRUE, is_split_into_words = FALSE, n_tok_inps = 1, ... )
HF_QABeforeBatchTransform( hf_arch, hf_tokenizer, max_length = NULL, padding = TRUE, truncation = TRUE, is_split_into_words = FALSE, n_tok_inps = 1, ... )
hf_arch |
architecture |
hf_tokenizer |
tokenizer |
max_length |
maximum length |
padding |
padding or not |
truncation |
truncation or not |
is_split_into_words |
into split into words or not |
n_tok_inps |
number of tok inputs |
... |
additional arguments |
as a byproduct of the tokenization process in the 'encodes' method.
None
HF_QstAndAnsModelCallback
HF_QstAndAnsModelCallback(...)
HF_QstAndAnsModelCallback(...)
... |
parameters to pass |
None
HF_QuestionAnswerInput
HF_QuestionAnswerInput(...)
HF_QuestionAnswerInput(...)
... |
parameters to pass |
None
Splits the huggingface model based on various model architecture conventions
hf_splitter(m)
hf_splitter(m)
m |
parameters |
None
Handles everything you need to assemble a mini-batch of inputs and targets, as well as decode the dictionary produced as a byproduct of the tokenization process in the 'encodes' method.
HF_SummarizationBeforeBatchTransform( hf_arch, hf_tokenizer, max_length = NULL, padding = TRUE, truncation = TRUE, is_split_into_words = FALSE, n_tok_inps = 2, ignore_token_id = -100, ... )
HF_SummarizationBeforeBatchTransform( hf_arch, hf_tokenizer, max_length = NULL, padding = TRUE, truncation = TRUE, is_split_into_words = FALSE, n_tok_inps = 2, ignore_token_id = -100, ... )
hf_arch |
architecture |
hf_tokenizer |
tokenizer |
max_length |
maximum length |
padding |
padding or not |
truncation |
truncation or not |
is_split_into_words |
to split into words |
n_tok_inps |
number tok inputs |
ignore_token_id |
ignore token id |
... |
additional arguments |
None
HF_SummarizationInput
HF_SummarizationInput()
HF_SummarizationInput()
None
Basic class handling tweaks of the training loop by changing a 'Learner' in various events
HF_SummarizationModelCallback( rouge_metrics = c("rouge1", "rouge2", "rougeL"), ignore_token_id = -100, ... )
HF_SummarizationModelCallback( rouge_metrics = c("rouge1", "rouge2", "rougeL"), ignore_token_id = -100, ... )
rouge_metrics |
rouge metrics |
ignore_token_id |
integer, ignore token id |
... |
additional arguments |
None
Delegates ('__call__','decode','setup') to ('encodes','decodes','setups') if 'split_idx' matches
HF_Text2TextAfterBatchTransform( hf_tokenizer, input_return_type = HF_BaseInput() )
HF_Text2TextAfterBatchTransform( hf_tokenizer, input_return_type = HF_BaseInput() )
hf_tokenizer |
tokenizer |
input_return_type |
input return type |
None
A basic wrapper that links defaults transforms for the data block API
HF_Text2TextBlock(...)
HF_Text2TextBlock(...)
... |
parameters to pass |
None
A basic wrapper that links defaults transforms for the data block API
HF_TextBlock(...)
HF_TextBlock(...)
... |
arguments to pass |
None
Reversible transform of a list of category strings to 'vocab' ids
HF_TokenCategorize(vocab = NULL, ignore_token = NULL, ignore_token_id = NULL)
HF_TokenCategorize(vocab = NULL, ignore_token = NULL, ignore_token_id = NULL)
vocab |
vocabulary |
ignore_token |
ignore token |
ignore_token_id |
ignore token id |
None
'TransformBlock' for single-label categorical targets
HF_TokenCategoryBlock( vocab = NULL, ignore_token = NULL, ignore_token_id = NULL )
HF_TokenCategoryBlock( vocab = NULL, ignore_token = NULL, ignore_token_id = NULL )
vocab |
vocabulary |
ignore_token |
ignore token |
ignore_token_id |
ignore token id |
None
Handles everything you need to assemble a mini-batch of inputs and targets, as well as decode the dictionary produced
HF_TokenClassBeforeBatchTransform( hf_arch, hf_tokenizer, ignore_token_id = -100, max_length = NULL, padding = TRUE, truncation = TRUE, is_split_into_words = TRUE, n_tok_inps = 1, ... )
HF_TokenClassBeforeBatchTransform( hf_arch, hf_tokenizer, ignore_token_id = -100, max_length = NULL, padding = TRUE, truncation = TRUE, is_split_into_words = TRUE, n_tok_inps = 1, ... )
hf_arch |
architecture |
hf_tokenizer |
tokenizer |
ignore_token_id |
ignore token id |
max_length |
maximum length |
padding |
padding or not |
truncation |
truncation or not |
is_split_into_words |
to split into_words |
n_tok_inps |
number tok inputs |
... |
additional arguments |
as a byproduct of the tokenization process in the 'encodes' method.
None
HF_TokenClassInput
HF_TokenClassInput()
HF_TokenClassInput()
None
HF_TokenTensorCategory
HF_TokenTensorCategory()
HF_TokenTensorCategory()
None
Create a hook on 'm' with 'hook_func'.
Hook( m, hook_func, is_forward = TRUE, detach = TRUE, cpu = FALSE, gather = FALSE )
Hook( m, hook_func, is_forward = TRUE, detach = TRUE, cpu = FALSE, gather = FALSE )
m |
m parameter |
hook_func |
hook function |
is_forward |
is_forward or not |
detach |
detach or not |
cpu |
cpu or not |
gather |
gather or not |
Hooks are functions you can attach to a particular layer in your model and that will be executed in the forward pass (for forward hooks) or backward pass (for backward hooks).
None
Return a 'Hook' that stores activations of 'module' in 'self$stored'
hook_output(module, detach = TRUE, cpu = FALSE, grad = FALSE)
hook_output(module, detach = TRUE, cpu = FALSE, grad = FALSE)
module |
module |
detach |
detach or not |
cpu |
cpu or not |
grad |
grad or not |
None
Return 'Hooks' that store activations of all 'modules' in 'self.stored'
hook_outputs(modules, detach = TRUE, cpu = FALSE, grad = FALSE)
hook_outputs(modules, detach = TRUE, cpu = FALSE, grad = FALSE)
modules |
modules |
detach |
detach or not |
cpu |
cpu or not |
grad |
grad or not |
None
'Callback' that can be used to register hooks on 'modules'
HookCallback( modules = NULL, every = NULL, remove_end = TRUE, is_forward = TRUE, detach = TRUE, cpu = TRUE )
HookCallback( modules = NULL, every = NULL, remove_end = TRUE, is_forward = TRUE, detach = TRUE, cpu = TRUE )
modules |
modules |
every |
every |
remove_end |
remove_end or not |
is_forward |
is_forward or not |
detach |
detach or not |
cpu |
cpu or not |
None
Create several hooks on the modules in 'ms' with 'hook_func'.
Hooks(ms, hook_func, is_forward = TRUE, detach = TRUE, cpu = FALSE)
Hooks(ms, hook_func, is_forward = TRUE, detach = TRUE, cpu = FALSE)
ms |
ms parameter |
hook_func |
hook function |
is_forward |
is_forward or not |
detach |
detach or not |
cpu |
cpu or not |
None
Converts an HSV image to an RGB image.
hsv2rgb(img)
hsv2rgb(img)
img |
image object |
None
Apply change in hue of 'max_hue' to batch of images with probability 'p'.
Hue(max_hue = 0.1, p = 0.75, draw = NULL, batch = FALSE)
Hue(max_hue = 0.1, p = 0.75, draw = NULL, batch = FALSE)
max_hue |
maximum hue |
p |
probability |
draw |
draw |
batch |
batch |
None
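A short sketch of 'Hue' used as a batch transform, following the 'batch_tfms' pattern shown elsewhere in this reference; the surrounding data pipeline is assumed:
## Not run: 
# shift hue by up to 0.1 on 75% of batches
tfms = list(Hue(max_hue = 0.1, p = 0.75),
            Normalize_from_stats(imagenet_stats()))
# pass 'tfms' as the 'batch_tfms' argument when building DataLoaders
## End(Not run)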
Adapter that enables the use of albumentations transforms.
icevision_Adapter(tfms)
icevision_Adapter(tfms)
tfms |
'Sequence' of albumentations transforms. |
None
Collection of useful augmentation transforms.
icevision_aug_tfms( size, presize = NULL, horizontal_flip = icevision_HorizontalFlip(always_apply = FALSE, p = 0.5), shift_scale_rotate = icevision_ShiftScaleRotate(always_apply = FALSE, p = 0.5, shift_limit_x = c(-0.0625, 0.0625), shift_limit_y = c(-0.0625, 0.0625), scale_limit = c(-0.1, 0.1), rotate_limit = c(-45, 45), interpolation = 1, border_mode = 4, value = NULL, mask_value = NULL), rgb_shift = icevision_RGBShift(always_apply = FALSE, p = 0.5, r_shift_limit = c(-20, 20), g_shift_limit = c(-20, 20), b_shift_limit = c(-20, 20)), lightning = icevision_RandomBrightnessContrast(always_apply = FALSE, p = 0.5, brightness_limit = c(-0.2, 0.2), contrast_limit = c(-0.2, 0.2), brightness_by_max = TRUE), blur = icevision_Blur(always_apply = FALSE, p = 0.5, blur_limit = c(1, 3)), crop_fn = partial(icevision_RandomSizedBBoxSafeCrop, p = 0.5), pad = partial(icevision_PadIfNeeded, border_mode = 0, value = list(124, 116, 104)) )
icevision_aug_tfms( size, presize = NULL, horizontal_flip = icevision_HorizontalFlip(always_apply = FALSE, p = 0.5), shift_scale_rotate = icevision_ShiftScaleRotate(always_apply = FALSE, p = 0.5, shift_limit_x = c(-0.0625, 0.0625), shift_limit_y = c(-0.0625, 0.0625), scale_limit = c(-0.1, 0.1), rotate_limit = c(-45, 45), interpolation = 1, border_mode = 4, value = NULL, mask_value = NULL), rgb_shift = icevision_RGBShift(always_apply = FALSE, p = 0.5, r_shift_limit = c(-20, 20), g_shift_limit = c(-20, 20), b_shift_limit = c(-20, 20)), lightning = icevision_RandomBrightnessContrast(always_apply = FALSE, p = 0.5, brightness_limit = c(-0.2, 0.2), contrast_limit = c(-0.2, 0.2), brightness_by_max = TRUE), blur = icevision_Blur(always_apply = FALSE, p = 0.5, blur_limit = c(1, 3)), crop_fn = partial(icevision_RandomSizedBBoxSafeCrop, p = 0.5), pad = partial(icevision_PadIfNeeded, border_mode = 0, value = list(124, 116, 104)) )
size |
The final size of the image. If an 'int' is given, the maximum size of the image is rescaled, maintaining aspect ratio. If a 'list' is given, the image is rescaled to have that exact size (height, width). |
presize |
presize |
horizontal_flip |
Flip around the y-axis. If 'NULL' this transform is not applied. |
shift_scale_rotate |
Randomly shift, scale, and rotate. If 'NULL' this transform is not applied. |
rgb_shift |
Randomly shift values for each channel of RGB image. If 'NULL' this transform is not applied. |
lightning |
Randomly changes Brightness and Contrast. If 'NULL' this transform is not applied. |
blur |
Randomly blur the image. If 'NULL' this transform is not applied. |
crop_fn |
Randomly crop the image. If 'NULL' this transform is not applied. Use 'partial' to saturate other parameters of the class. |
pad |
Pad the image to 'size', squaring the image if 'size' is an 'int'. If 'NULL' this transform is not applied. Use 'partial' to saturate other parameters of the class. |
None
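A hedged sketch of a typical train/valid transform pair built with 'icevision_aug_tfms' and 'icevision_resize_and_pad' (documented later in this reference); the sizes are illustrative:
## Not run: 
train_tfms = icevision_aug_tfms(size = 384, presize = 512)
valid_tfms = icevision_resize_and_pad(size = 384)
## End(Not run)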
BasicIAATransform
icevision_BasicIAATransform(always_apply = FALSE, p = 0.5)
icevision_BasicIAATransform(always_apply = FALSE, p = 0.5)
always_apply |
always_apply |
p |
p |
None
BasicTransform
icevision_BasicTransform(always_apply = FALSE, p = 0.5)
icevision_BasicTransform(always_apply = FALSE, p = 0.5)
always_apply |
always_apply |
p |
p |
None
Blur the input image using a random-sized kernel.
icevision_Blur(blur_limit = 7, always_apply = FALSE, p = 0.5)
icevision_Blur(blur_limit = 7, always_apply = FALSE, p = 0.5)
blur_limit |
blur_limit |
always_apply |
always_apply |
p |
p |
None
image
uint8, float32
Randomly Drop Channels in the input Image.
icevision_ChannelDropout( channel_drop_range = list(1, 1), fill_value = 0, always_apply = FALSE, p = 0.5 )
icevision_ChannelDropout( channel_drop_range = list(1, 1), fill_value = 0, always_apply = FALSE, p = 0.5 )
channel_drop_range |
channel_drop_range |
fill_value |
fill_value |
always_apply |
always_apply |
p |
p |
image
uint8, uint16, uint32, float32
Randomly rearrange channels of the input RGB image.
icevision_ChannelShuffle(always_apply = FALSE, p = 0.5)
icevision_ChannelShuffle(always_apply = FALSE, p = 0.5)
always_apply |
always_apply |
p |
p |
None
image
uint8, float32
Apply Contrast Limited Adaptive Histogram Equalization to the input image.
icevision_CLAHE( clip_limit = 4, tile_grid_size = list(8, 8), always_apply = FALSE, p = 0.5 )
icevision_CLAHE( clip_limit = 4, tile_grid_size = list(8, 8), always_apply = FALSE, p = 0.5 )
clip_limit |
clip_limit |
tile_grid_size |
tile_grid_size |
always_apply |
always_apply |
p |
p |
None
image
uint8
Utility class for mapping between class name and id.
icevision_ClassMap(classes, background = 0)
icevision_ClassMap(classes, background = 0)
classes |
classes |
background |
background |
Python dictionary
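A minimal sketch of building a class map; the class names here are illustrative assumptions:
## Not run: 
class_map = icevision_ClassMap(classes = list('person', 'car'), background = 0)
## End(Not run)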
CoarseDropout of the rectangular regions in the image.
icevision_CoarseDropout( max_holes = 8, max_height = 8, max_width = 8, min_holes = NULL, min_height = NULL, min_width = NULL, fill_value = 0, mask_fill_value = NULL, always_apply = FALSE, p = 0.5 )
icevision_CoarseDropout( max_holes = 8, max_height = 8, max_width = 8, min_holes = NULL, min_height = NULL, min_width = NULL, fill_value = 0, mask_fill_value = NULL, always_apply = FALSE, p = 0.5 )
max_holes |
max_holes |
max_height |
max_height |
max_width |
max_width |
min_holes |
min_holes |
min_height |
min_height |
min_width |
min_width |
fill_value |
fill_value |
mask_fill_value |
mask_fill_value |
always_apply |
always_apply |
p |
p |
None
image, mask
uint8, float32
| https://arxiv.org/abs/1708.04552 | https://github.com/uoguelph-mlrg/Cutout/blob/master/util/cutout.py | https://github.com/aleju/imgaug/blob/master/imgaug/augmenters/arithmetic.py
Randomly changes the brightness, contrast, and saturation of an image. Compared to ColorJitter from torchvision,
icevision_ColorJitter( brightness = 0.2, contrast = 0.2, saturation = 0.2, hue = 0.2, always_apply = FALSE, p = 0.5 )
icevision_ColorJitter( brightness = 0.2, contrast = 0.2, saturation = 0.2, hue = 0.2, always_apply = FALSE, p = 0.5 )
brightness |
brightness |
contrast |
contrast |
saturation |
saturation |
hue |
hue |
always_apply |
always_apply |
p |
p |
this transform gives a little bit different results because Pillow (used in torchvision) and OpenCV (used in Albumentations) transform an image to HSV format by different formulas. Another difference - Pillow uses uint8 overflow, but we use value saturation.
None
Compose transforms and handle all transformations regarding bounding boxes
icevision_Compose( transforms, bbox_params = NULL, keypoint_params = NULL, additional_targets = NULL, p = 1 )
icevision_Compose( transforms, bbox_params = NULL, keypoint_params = NULL, additional_targets = NULL, p = 1 )
transforms |
transforms |
bbox_params |
bbox_params |
keypoint_params |
keypoint_params |
additional_targets |
additional_targets |
p |
p |
None
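A minimal sketch composing two of the albumentations-style transforms documented in this reference; the parameter values are illustrative:
## Not run: 
tfm = icevision_Compose(list(icevision_HorizontalFlip(p = 0.5),
                             icevision_Blur(blur_limit = 3)))
## End(Not run)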
Crop region from image.
icevision_Crop( x_min = 0, y_min = 0, x_max = 1024, y_max = 1024, always_apply = FALSE, p = 1 )
icevision_Crop( x_min = 0, y_min = 0, x_max = 1024, y_max = 1024, always_apply = FALSE, p = 1 )
x_min |
x_min |
y_min |
y_min |
x_max |
x_max |
y_max |
y_max |
always_apply |
always_apply |
p |
p |
image, mask, bboxes, keypoints
uint8, float32
Crop area with mask if mask is non-empty, else make random crop.
icevision_CropNonEmptyMaskIfExists( height, width, ignore_values = NULL, ignore_channels = NULL, always_apply = FALSE, p = 1 )
icevision_CropNonEmptyMaskIfExists( height, width, ignore_values = NULL, ignore_channels = NULL, always_apply = FALSE, p = 1 )
height |
height |
width |
width |
ignore_values |
ignore_values |
ignore_channels |
ignore_channels |
always_apply |
always_apply |
p |
p |
None
image, mask, bboxes, keypoints
uint8, float32
CoarseDropout of the square regions in the image.
icevision_Cutout( num_holes = 8, max_h_size = 8, max_w_size = 8, fill_value = 0, always_apply = FALSE, p = 0.5 )
icevision_Cutout( num_holes = 8, max_h_size = 8, max_w_size = 8, fill_value = 0, always_apply = FALSE, p = 0.5 )
num_holes |
num_holes |
max_h_size |
max_h_size |
max_w_size |
max_w_size |
fill_value |
fill_value |
always_apply |
always_apply |
p |
p |
None
image
uint8, float32
| https://arxiv.org/abs/1708.04552 | https://github.com/uoguelph-mlrg/Cutout/blob/master/util/cutout.py | https://github.com/aleju/imgaug/blob/master/imgaug/augmenters/arithmetic.py
Container for a list of records and transforms.
icevision_Dataset(records, tfm = NULL)
icevision_Dataset(records, tfm = NULL)
records |
A list of records. |
tfm |
Transforms to be applied to each item. |
Steps each time an item is requested (normally via directly indexing the 'Dataset'): Grab a record from the internal list of records. Prepare the record (open the image, open the mask, add metadata). Apply transforms to the record.
None
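A sketch of wrapping records and transforms into datasets; 'train_records' and 'valid_records' are assumed to come from a parser and are not defined here:
## Not run: 
train_ds = icevision_Dataset(train_records, train_tfms)
valid_ds = icevision_Dataset(valid_records, valid_tfms)
## End(Not run)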
Creates a 'Dataset' from a list of images.
icevision_Dataset_from_images(images, tfm = NULL, ...)
icevision_Dataset_from_images(images, tfm = NULL, ...)
images |
'Sequence' of images in memory (numpy arrays). |
tfm |
Transforms to be applied to each item. |
... |
additional arguments |
None
Decreases image quality by downscaling and upscaling back.
icevision_Downscale( scale_min = 0.25, scale_max = 0.25, interpolation = 0, always_apply = FALSE, p = 0.5 )
icevision_Downscale( scale_min = 0.25, scale_max = 0.25, interpolation = 0, always_apply = FALSE, p = 0.5 )
scale_min |
scale_min |
scale_max |
scale_max |
interpolation |
cv2 interpolation method. cv2.INTER_NEAREST by default |
always_apply |
always_apply |
p |
p |
None
image
uint8, float32
Transform for segmentation task.
icevision_DualIAATransform(always_apply = FALSE, p = 0.5)
icevision_DualIAATransform(always_apply = FALSE, p = 0.5)
always_apply |
always_apply |
p |
p |
None
Transform for segmentation task.
icevision_DualTransform(always_apply = FALSE, p = 0.5)
icevision_DualTransform(always_apply = FALSE, p = 0.5)
always_apply |
always_apply |
p |
p |
None
Elastic deformation of images as described in [Simard2003]_ (with modifications).
icevision_ElasticTransform( alpha = 1, sigma = 50, alpha_affine = 50, interpolation = 1, border_mode = 4, value = NULL, mask_value = NULL, always_apply = FALSE, approximate = FALSE, p = 0.5 )
icevision_ElasticTransform( alpha = 1, sigma = 50, alpha_affine = 50, interpolation = 1, border_mode = 4, value = NULL, mask_value = NULL, always_apply = FALSE, approximate = FALSE, p = 0.5 )
alpha |
alpha |
sigma |
sigma |
alpha_affine |
alpha_affine |
interpolation |
interpolation |
border_mode |
border_mode |
value |
value |
mask_value |
mask_value |
always_apply |
always_apply |
approximate |
approximate |
p |
p |
Based on https://gist.github.com/erniejunior/601cdf56d2b424757de5. [Simard2003] Simard, Steinkraus and Platt, "Best Practices for Convolutional Neural Networks applied to Visual Document Analysis", in Proc. of the International Conference on Document Analysis and Recognition, 2003.
None
image, mask
uint8, float32
Equalize the image histogram.
icevision_Equalize(mode = "cv", by_channels = TRUE, mask = NULL, ...)
icevision_Equalize(mode = "cv", by_channels = TRUE, mask = NULL, ...)
mode |
mode |
by_channels |
by_channels |
mask |
mask |
... |
additional arguments |
None
image
uint8
Augment RGB image using FancyPCA from Krizhevsky's paper
icevision_FancyPCA(alpha = 0.1, always_apply = FALSE, p = 0.5)
icevision_FancyPCA(alpha = 0.1, always_apply = FALSE, p = 0.5)
alpha |
alpha |
always_apply |
always_apply |
p |
p |
"ImageNet Classification with Deep Convolutional Neural Networks"
None
image
3-channel uint8 images only
http://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf https://deshanadesai.github.io/notes/Fancy-PCA-with-Scikit-Image https://pixelatedbrian.github.io/2018-04-29-fancy_pca/
Fourier Domain Adaptation from https://github.com/YanchaoYang/FDA
icevision_FDA( reference_images, beta_limit = 0.1, read_fn = icevision_read_rgb_image(), always_apply = FALSE, p = 0.5 )
icevision_FDA( reference_images, beta_limit = 0.1, read_fn = icevision_read_rgb_image(), always_apply = FALSE, p = 0.5 )
reference_images |
reference_images |
beta_limit |
beta_limit |
read_fn |
read_fn |
always_apply |
always_apply |
p |
p |
Simple "style transfer".
None
image
uint8, float32
https://github.com/YanchaoYang/FDA https://openaccess.thecvf.com/content_CVPR_2020/papers/Yang_FDA_Fourier_Domain_Adaptation_for_Semantic_Segmentation_CVPR_2020_paper.pdf
>>> import numpy as np
>>> import albumentations as A
>>> image = np.random.randint(0, 256, [100, 100, 3], dtype=np.uint8)
>>> target_image = np.random.randint(0, 256, [100, 100, 3], dtype=np.uint8)
>>> aug = A.Compose([A.FDA([target_image], p=1, read_fn=lambda x: x)])
>>> result = aug(image=image)
Split 'ids' based on predefined splits.
icevision_FixedSplitter(splits)
icevision_FixedSplitter(splits)
splits |
The predefined splits. |
None
Flip the input either horizontally, vertically or both horizontally and vertically.
icevision_Flip(always_apply = FALSE, p = 0.5)
icevision_Flip(always_apply = FALSE, p = 0.5)
always_apply |
always_apply |
p |
p |
None
image, mask, bboxes, keypoints
uint8, float32
Take an input array where all values should lie in the range [0, 1.0], multiply them by 'max_value' and then
icevision_FromFloat( dtype = "uint16", max_value = NULL, always_apply = FALSE, p = 1 )
icevision_FromFloat( dtype = "uint16", max_value = NULL, always_apply = FALSE, p = 1 )
dtype |
dtype |
max_value |
max_value |
always_apply |
always_apply |
p |
p |
cast the resulting value to a type specified by 'dtype'. If 'max_value' is NULL the transform will try to infer the maximum value for the data type from the 'dtype' argument. This is the inverse of the 'ToFloat' transform (albumentations.augmentations.transforms.ToFloat).
None
image
float32
Blur the input image using a Gaussian filter with a random kernel size.
icevision_GaussianBlur( blur_limit = list(3, 7), sigma_limit = 0, always_apply = FALSE, p = 0.5 )
icevision_GaussianBlur( blur_limit = list(3, 7), sigma_limit = 0, always_apply = FALSE, p = 0.5 )
blur_limit |
blur_limit |
sigma_limit |
sigma_limit |
always_apply |
always_apply |
p |
p |
None
image
uint8, float32
Apply gaussian noise to the input image.
icevision_GaussNoise( var_limit = list(10, 50), mean = 0, always_apply = FALSE, p = 0.5 )
icevision_GaussNoise( var_limit = list(10, 50), mean = 0, always_apply = FALSE, p = 0.5 )
var_limit |
var_limit |
mean |
mean |
always_apply |
always_apply |
p |
p |
None
image
uint8, float32
Apply glass noise to the input image.
icevision_GlassBlur( sigma = 0.7, max_delta = 4, iterations = 2, always_apply = FALSE, mode = "fast", p = 0.5 )
icevision_GlassBlur( sigma = 0.7, max_delta = 4, iterations = 2, always_apply = FALSE, mode = "fast", p = 0.5 )
sigma |
sigma |
max_delta |
max_delta |
iterations |
iterations |
always_apply |
always_apply |
mode |
mode |
p |
p |
None
image
uint8, float32
| https://arxiv.org/abs/1903.12261 | https://github.com/hendrycks/robustness/blob/master/ImageNet-C/create_c/make_imagenet_c.py
GridDistortion
icevision_GridDistortion( num_steps = 5, distort_limit = 0.3, interpolation = 1, border_mode = 4, value = NULL, mask_value = NULL, always_apply = FALSE, p = 0.5 )
icevision_GridDistortion( num_steps = 5, distort_limit = 0.3, interpolation = 1, border_mode = 4, value = NULL, mask_value = NULL, always_apply = FALSE, p = 0.5 )
num_steps |
num_steps |
distort_limit |
distort_limit |
interpolation |
interpolation |
border_mode |
border_mode |
value |
value |
mask_value |
mask_value |
always_apply |
always_apply |
p |
p |
num_steps (int): count of grid cells on each side.
distort_limit (float, (float, float)): if distort_limit is a single float, the range will be (-distort_limit, distort_limit). Default: (-0.03, 0.03).
interpolation (OpenCV flag): interpolation algorithm; one of cv2.INTER_NEAREST, cv2.INTER_LINEAR, cv2.INTER_CUBIC, cv2.INTER_AREA, cv2.INTER_LANCZOS4. Default: cv2.INTER_LINEAR.
border_mode (OpenCV flag): pixel extrapolation method; one of cv2.BORDER_CONSTANT, cv2.BORDER_REPLICATE, cv2.BORDER_REFLECT, cv2.BORDER_WRAP, cv2.BORDER_REFLECT_101. Default: cv2.BORDER_REFLECT_101.
value (int, float, list of ints, list of floats): padding value if border_mode is cv2.BORDER_CONSTANT.
mask_value (int, float, list of ints, list of floats): padding value if border_mode is cv2.BORDER_CONSTANT applied for masks.
None
image, mask
uint8, float32
GridDropout, drops out rectangular regions of an image and the corresponding mask in a grid fashion.
icevision_GridDropout( ratio = 0.5, unit_size_min = NULL, unit_size_max = NULL, holes_number_x = NULL, holes_number_y = NULL, shift_x = 0, shift_y = 0, random_offset = FALSE, fill_value = 0, mask_fill_value = NULL, always_apply = FALSE, p = 0.5 )
icevision_GridDropout( ratio = 0.5, unit_size_min = NULL, unit_size_max = NULL, holes_number_x = NULL, holes_number_y = NULL, shift_x = 0, shift_y = 0, random_offset = FALSE, fill_value = 0, mask_fill_value = NULL, always_apply = FALSE, p = 0.5 )
ratio |
ratio |
unit_size_min |
unit_size_min |
unit_size_max |
unit_size_max |
holes_number_x |
holes_number_x |
holes_number_y |
holes_number_y |
shift_x |
shift_x |
shift_y |
shift_y |
random_offset |
random_offset |
fill_value |
fill_value |
mask_fill_value |
mask_fill_value |
always_apply |
always_apply |
p |
p |
None
image, mask
uint8, float32
https://arxiv.org/abs/2001.04086
Apply histogram matching. It manipulates the pixels of an input image so that its histogram matches
icevision_HistogramMatching( reference_images, blend_ratio = list(0.5, 1), read_fn = icevision_read_rgb_image(), always_apply = FALSE, p = 0.5 )
icevision_HistogramMatching( reference_images, blend_ratio = list(0.5, 1), read_fn = icevision_read_rgb_image(), always_apply = FALSE, p = 0.5 )
reference_images |
reference_images |
blend_ratio |
blend_ratio |
read_fn |
read_fn |
always_apply |
always_apply |
p |
p |
the histogram of the reference image. If the images have multiple channels, the matching is done independently for each channel, as long as the number of channels is equal in the input image and the reference. Histogram matching can be used as a lightweight normalisation for image processing, such as feature matching, especially in circumstances where the images have been taken from different sources or in different conditions (i.e. lighting). See: https://scikit-image.org/docs/dev/auto_examples/color_exposure/plot_histogram_matching.html
None
https://scikit-image.org/docs/dev/auto_examples/color_exposure/plot_histogram_matching.html
image
uint8, uint16, float32
Flip the input horizontally around the y-axis.
icevision_HorizontalFlip(always_apply = FALSE, p = 0.5)
icevision_HorizontalFlip(always_apply = FALSE, p = 0.5)
always_apply |
always_apply |
p |
p |
None
image, mask, bboxes, keypoints
uint8, float32
Randomly change hue, saturation and value of the input image.
icevision_HueSaturationValue( hue_shift_limit = 20, sat_shift_limit = 30, val_shift_limit = 20, always_apply = FALSE, p = 0.5 )
icevision_HueSaturationValue( hue_shift_limit = 20, sat_shift_limit = 30, val_shift_limit = 20, always_apply = FALSE, p = 0.5 )
hue_shift_limit |
hue_shift_limit |
sat_shift_limit |
sat_shift_limit |
val_shift_limit |
val_shift_limit |
always_apply |
always_apply |
p |
p |
None
image
uint8, float32
Add gaussian noise to the input image.
icevision_IAAAdditiveGaussianNoise( loc = 0, scale = list(2.55, 12.75), per_channel = FALSE, always_apply = FALSE, p = 0.5 )
icevision_IAAAdditiveGaussianNoise( loc = 0, scale = list(2.55, 12.75), per_channel = FALSE, always_apply = FALSE, p = 0.5 )
loc |
loc |
scale |
scale |
per_channel |
per_channel |
always_apply |
always_apply |
p |
p |
None
image
Place a regular grid of points on the input and randomly move the neighbourhood of these points around
icevision_IAAAffine( scale = 1, translate_percent = NULL, translate_px = NULL, rotate = 0, shear = 0, order = 1, cval = 0, mode = "reflect", always_apply = FALSE, p = 0.5 )
icevision_IAAAffine( scale = 1, translate_percent = NULL, translate_px = NULL, rotate = 0, shear = 0, order = 1, cval = 0, mode = "reflect", always_apply = FALSE, p = 0.5 )
scale |
scale |
translate_percent |
translate_percent |
translate_px |
translate_px |
rotate |
rotate |
shear |
shear |
order |
order |
cval |
cval |
mode |
mode |
always_apply |
always_apply |
p |
p |
via affine transformations. Note: this class introduces interpolation artifacts to the mask if it has values other than (0;1)
None
image, mask
Transform for segmentation task.
icevision_IAACropAndPad( px = NULL, percent = NULL, pad_mode = "constant", pad_cval = 0, keep_size = TRUE, always_apply = FALSE, p = 1 )
icevision_IAACropAndPad( px = NULL, percent = NULL, pad_mode = "constant", pad_cval = 0, keep_size = TRUE, always_apply = FALSE, p = 1 )
px |
px |
percent |
percent |
pad_mode |
pad_mode |
pad_cval |
pad_cval |
keep_size |
keep_size |
always_apply |
always_apply |
p |
p |
Emboss the input image and overlays the result with the original image.
icevision_IAAEmboss( alpha = list(0.2, 0.5), strength = list(0.2, 0.7), always_apply = FALSE, p = 0.5 )
icevision_IAAEmboss( alpha = list(0.2, 0.5), strength = list(0.2, 0.7), always_apply = FALSE, p = 0.5 )
alpha |
alpha |
strength |
strength |
always_apply |
always_apply |
p |
p |
None
image
Transform for segmentation task.
icevision_IAAFliplr(always_apply = FALSE, p = 0.5)
icevision_IAAFliplr(always_apply = FALSE, p = 0.5)
always_apply |
always_apply |
p |
p |
None
Transform for segmentation task.
icevision_IAAFlipud(always_apply = FALSE, p = 0.5)
icevision_IAAFlipud(always_apply = FALSE, p = 0.5)
always_apply |
always_apply |
p |
p |
None
Perform a random four point perspective transform of the input.
icevision_IAAPerspective( scale = list(0.05, 0.1), keep_size = TRUE, always_apply = FALSE, p = 0.5 )
icevision_IAAPerspective( scale = list(0.05, 0.1), keep_size = TRUE, always_apply = FALSE, p = 0.5 )
scale |
scale |
keep_size |
keep_size |
always_apply |
always_apply |
p |
p |
Note: this class introduces interpolation artifacts to the mask if it has values other than (0;1)
None
image, mask
Place a regular grid of points on the input and randomly move the neighbourhood of these points around
icevision_IAAPiecewiseAffine( scale = list(0.03, 0.05), nb_rows = 4, nb_cols = 4, order = 1, cval = 0, mode = "constant", always_apply = FALSE, p = 0.5 )
icevision_IAAPiecewiseAffine( scale = list(0.03, 0.05), nb_rows = 4, nb_cols = 4, order = 1, cval = 0, mode = "constant", always_apply = FALSE, p = 0.5 )
scale |
scale |
nb_rows |
nb_rows |
nb_cols |
nb_cols |
order |
order |
cval |
cval |
mode |
mode |
always_apply |
always_apply |
p |
p |
via affine transformations. Note: this class introduces interpolation artifacts to the mask if it has values other than (0;1)
None
image, mask
Sharpen the input image and overlays the result with the original image.
icevision_IAASharpen( alpha = list(0.2, 0.5), lightness = list(0.5, 1), always_apply = FALSE, p = 0.5 )
icevision_IAASharpen( alpha = list(0.2, 0.5), lightness = list(0.5, 1), always_apply = FALSE, p = 0.5 )
alpha |
alpha |
lightness |
lightness |
always_apply |
always_apply |
p |
p |
None
image
Completely or partially transform the input image to its superpixel representation. Uses skimage's version
icevision_IAASuperpixels( p_replace = 0.1, n_segments = 100, always_apply = FALSE, p = 0.5 )
icevision_IAASuperpixels( p_replace = 0.1, n_segments = 100, always_apply = FALSE, p = 0.5 )
p_replace |
p_replace |
n_segments |
n_segments |
always_apply |
always_apply |
p |
p |
of the SLIC algorithm. May be slow.
None
image
Decrease Jpeg, WebP compression of an image.
icevision_ImageCompression( quality_lower = 99, quality_upper = 100, compression_type = 0, always_apply = FALSE, p = 0.5 )
icevision_ImageCompression( quality_lower = 99, quality_upper = 100, compression_type = 0, always_apply = FALSE, p = 0.5 )
quality_lower |
quality_lower |
quality_upper |
quality_upper |
compression_type |
compression_type |
always_apply |
always_apply |
p |
p |
None
image
uint8, float32
Transform applied to image only.
icevision_ImageOnlyIAATransform(always_apply = FALSE, p = 0.5)
icevision_ImageOnlyIAATransform(always_apply = FALSE, p = 0.5)
always_apply |
always_apply |
p |
p |
None
Transform applied to image only.
icevision_ImageOnlyTransform(always_apply = FALSE, p = 0.5)
icevision_ImageOnlyTransform(always_apply = FALSE, p = 0.5)
always_apply |
always_apply |
p |
p |
None
Invert the input image by subtracting pixel values from 255.
icevision_InvertImg(always_apply = FALSE, p = 0.5)
icevision_InvertImg(always_apply = FALSE, p = 0.5)
always_apply |
always_apply |
p |
p |
None
image
uint8
Apply camera sensor noise.
icevision_ISONoise( color_shift = list(0.01, 0.05), intensity = list(0.1, 0.5), always_apply = FALSE, p = 0.5 )
icevision_ISONoise( color_shift = list(0.01, 0.05), intensity = list(0.1, 0.5), always_apply = FALSE, p = 0.5 )
color_shift |
color_shift |
intensity |
intensity |
always_apply |
always_apply |
p |
p |
None
image
uint8
Decrease Jpeg compression of an image.
icevision_JpegCompression( quality_lower = 99, quality_upper = 100, always_apply = FALSE, p = 0.5 )
icevision_JpegCompression( quality_lower = 99, quality_upper = 100, always_apply = FALSE, p = 0.5 )
quality_lower |
quality_lower |
quality_upper |
quality_upper |
always_apply |
always_apply |
p |
p |
None
image
uint8, float32
Rescale an image so that maximum side is equal to max_size, keeping the aspect ratio of the initial image.
icevision_LongestMaxSize( max_size = 1024, interpolation = 1, always_apply = FALSE, p = 1 )
icevision_LongestMaxSize( max_size = 1024, interpolation = 1, always_apply = FALSE, p = 1 )
max_size |
max_size |
interpolation |
interpolation |
always_apply |
always_apply |
p |
p |
None
image, mask, bboxes, keypoints
uint8, float32
Image & mask augmentation that zeroes out mask and image regions corresponding
icevision_MaskDropout( max_objects = 1, image_fill_value = 0, mask_fill_value = 0, always_apply = FALSE, p = 0.5 )
icevision_MaskDropout( max_objects = 1, image_fill_value = 0, mask_fill_value = 0, always_apply = FALSE, p = 0.5 )
max_objects |
max_objects |
image_fill_value |
image_fill_value |
mask_fill_value |
mask_fill_value |
always_apply |
always_apply |
p |
p |
to a randomly chosen object instance from the mask. The mask must be a single-channel image; zero values are treated as background. The image can have any number of channels. Inspired by https://www.kaggle.com/c/severstal-steel-defect-detection/discussion/114254
None
Blur the input image using a median filter with a random aperture linear size.
icevision_MedianBlur(blur_limit = 7, always_apply = FALSE, p = 0.5)
icevision_MedianBlur(blur_limit = 7, always_apply = FALSE, p = 0.5)
blur_limit |
blur_limit |
always_apply |
always_apply |
p |
p |
None
image
uint8, float32
Apply motion blur to the input image using a random-sized kernel.
icevision_MotionBlur(blur_limit = 7, always_apply = FALSE, p = 0.5)
icevision_MotionBlur(blur_limit = 7, always_apply = FALSE, p = 0.5)
blur_limit |
blur_limit |
always_apply |
always_apply |
p |
p |
None
image
uint8, float32
Multiply the image by a random number or array of numbers.
icevision_MultiplicativeNoise( multiplier = list(0.9, 1.1), per_channel = FALSE, elementwise = FALSE, always_apply = FALSE, p = 0.5 )
icevision_MultiplicativeNoise( multiplier = list(0.9, 1.1), per_channel = FALSE, elementwise = FALSE, always_apply = FALSE, p = 0.5 )
multiplier |
multiplier |
per_channel |
per_channel |
elementwise |
elementwise |
always_apply |
always_apply |
p |
p |
None
image
Any
Divide pixel values by 255 = 2**8 - 1, subtract mean per channel and divide by std per channel.
icevision_Normalize( mean = list(0.485, 0.456, 0.406), std = list(0.229, 0.224, 0.225), max_pixel_value = 255, always_apply = FALSE, p = 1 )
icevision_Normalize( mean = list(0.485, 0.456, 0.406), std = list(0.229, 0.224, 0.225), max_pixel_value = 255, always_apply = FALSE, p = 1 )
mean |
mean |
std |
std |
max_pixel_value |
max_pixel_value |
always_apply |
always_apply |
p |
p |
None
image
uint8, float32
OpticalDistortion
icevision_OpticalDistortion( distort_limit = 0.05, shift_limit = 0.05, interpolation = 1, border_mode = 4, value = NULL, mask_value = NULL, always_apply = FALSE, p = 0.5 )
icevision_OpticalDistortion( distort_limit = 0.05, shift_limit = 0.05, interpolation = 1, border_mode = 4, value = NULL, mask_value = NULL, always_apply = FALSE, p = 0.5 )
distort_limit |
distort_limit |
shift_limit |
shift_limit |
interpolation |
interpolation |
border_mode |
border_mode |
value |
value |
mask_value |
mask_value |
always_apply |
always_apply |
p |
p |
distort_limit (float, (float, float)): if distort_limit is a single float, the range will be (-distort_limit, distort_limit). Default: (-0.05, 0.05).
shift_limit (float, (float, float)): if shift_limit is a single float, the range will be (-shift_limit, shift_limit). Default: (-0.05, 0.05).
interpolation (OpenCV flag): interpolation algorithm; one of cv2.INTER_NEAREST, cv2.INTER_LINEAR, cv2.INTER_CUBIC, cv2.INTER_AREA, cv2.INTER_LANCZOS4. Default: cv2.INTER_LINEAR.
border_mode (OpenCV flag): pixel extrapolation method; one of cv2.BORDER_CONSTANT, cv2.BORDER_REPLICATE, cv2.BORDER_REFLECT, cv2.BORDER_WRAP, cv2.BORDER_REFLECT_101. Default: cv2.BORDER_REFLECT_101.
value (int, float, list of ints, list of floats): padding value if border_mode is cv2.BORDER_CONSTANT.
mask_value (int, float, list of ints, list of floats): padding value if border_mode is cv2.BORDER_CONSTANT applied for masks.
None
image, mask
uint8, float32
Pad the sides of the image if they are smaller than the desired size.
icevision_PadIfNeeded( min_height = 1024, min_width = 1024, pad_height_divisor = NULL, pad_width_divisor = NULL, border_mode = 4, value = NULL, mask_value = NULL, always_apply = FALSE, p = 1 )
icevision_PadIfNeeded( min_height = 1024, min_width = 1024, pad_height_divisor = NULL, pad_width_divisor = NULL, border_mode = 4, value = NULL, mask_value = NULL, always_apply = FALSE, p = 1 )
min_height |
min_height |
min_width |
min_width |
pad_height_divisor |
pad_height_divisor |
pad_width_divisor |
pad_width_divisor |
border_mode |
border_mode |
value |
value |
mask_value |
mask_value |
always_apply |
always_apply |
p |
p |
image, mask, bboxes, keypoints
uint8, float32
Loops through all data points parsing the required fields.
icevision_parse( data_splitter = NULL, idmap = NULL, autofix = TRUE, show_pbar = TRUE, cache_filepath = NULL )
icevision_parse( data_splitter = NULL, idmap = NULL, autofix = TRUE, show_pbar = TRUE, cache_filepath = NULL )
data_splitter |
How to split the parsed data, defaults to a [0.8, 0.2] random split. |
idmap |
Maps from filenames to unique ids, pass an 'IDMap()' if you need this information. |
autofix |
autofix |
show_pbar |
Whether or not to show a progress bar while parsing the data. |
cache_filepath |
Path to save records in pickle format. Defaults to NULL, i.e. if the user does not specify a path, no saving or loading happens. |
A list of records for each split defined by data_splitter.
Reduce the number of bits for each color channel.
icevision_Posterize(num_bits = 4, always_apply = FALSE, p = 0.5)
icevision_Posterize(num_bits = 4, always_apply = FALSE, p = 0.5)
num_bits |
num_bits |
always_apply |
always_apply |
p |
p |
None
image
uint8
Randomly change brightness and contrast of the input image.
icevision_RandomBrightnessContrast( brightness_limit = 0.2, contrast_limit = 0.2, brightness_by_max = TRUE, always_apply = FALSE, p = 0.5 )
icevision_RandomBrightnessContrast( brightness_limit = 0.2, contrast_limit = 0.2, brightness_by_max = TRUE, always_apply = FALSE, p = 0.5 )
brightness_limit |
brightness_limit |
contrast_limit |
contrast_limit |
brightness_by_max |
brightness_by_max |
always_apply |
always_apply |
p |
p |
None
image
uint8, float32
Randomly change contrast of the input image.
icevision_RandomContrast(limit = 0.2, always_apply = FALSE, p = 0.5)
icevision_RandomContrast(limit = 0.2, always_apply = FALSE, p = 0.5)
limit |
limit |
always_apply |
always_apply |
p |
p |
None
image
uint8, float32
Crop a random part of the input.
icevision_RandomCrop(height, width, always_apply = FALSE, p = 1)
icevision_RandomCrop(height, width, always_apply = FALSE, p = 1)
height |
height |
width |
width |
always_apply |
always_apply |
p |
p |
None
image, mask, bboxes, keypoints
uint8, float32
Crop a bbox from the image with a random shift along the x and y coordinates
icevision_RandomCropNearBBox(max_part_shift = 0.3, always_apply = FALSE, p = 1)
icevision_RandomCropNearBBox(max_part_shift = 0.3, always_apply = FALSE, p = 1)
max_part_shift |
max_part_shift |
always_apply |
always_apply |
p |
p |
None
image, mask, bboxes, keypoints
uint8, float32
Simulates fog for the image
icevision_RandomFog( fog_coef_lower = 0.3, fog_coef_upper = 1, alpha_coef = 0.08, always_apply = FALSE, p = 0.5 )
icevision_RandomFog( fog_coef_lower = 0.3, fog_coef_upper = 1, alpha_coef = 0.08, always_apply = FALSE, p = 0.5 )
fog_coef_lower |
fog_coef_lower |
fog_coef_upper |
fog_coef_upper |
alpha_coef |
alpha_coef |
always_apply |
always_apply |
p |
p |
From https://github.com/UjjwalSaxena/Automold--Road-Augmentation-Library
None
image
uint8, float32
RandomGamma
icevision_RandomGamma( gamma_limit = list(80, 120), eps = NULL, always_apply = FALSE, p = 0.5 )
icevision_RandomGamma( gamma_limit = list(80, 120), eps = NULL, always_apply = FALSE, p = 0.5 )
gamma_limit |
gamma_limit |
eps |
Deprecated. |
always_apply |
always_apply |
p |
p |
gamma_limit (float or (float, float)): if gamma_limit is a single float value, the range will be (-gamma_limit, gamma_limit). Default: (80, 120).
eps: Deprecated.
None
image
uint8, float32
Randomly shuffle the grid's cells on the image.
icevision_RandomGridShuffle(grid = list(3, 3), always_apply = FALSE, p = 0.5)
icevision_RandomGridShuffle(grid = list(3, 3), always_apply = FALSE, p = 0.5)
grid |
grid |
always_apply |
always_apply |
p |
p |
None
image, mask
uint8, float32
Adds rain effects.
icevision_RandomRain( slant_lower = -10, slant_upper = 10, drop_length = 20, drop_width = 1, drop_color = list(200, 200, 200), blur_value = 7, brightness_coefficient = 0.7, rain_type = NULL, always_apply = FALSE, p = 0.5 )
icevision_RandomRain( slant_lower = -10, slant_upper = 10, drop_length = 20, drop_width = 1, drop_color = list(200, 200, 200), blur_value = 7, brightness_coefficient = 0.7, rain_type = NULL, always_apply = FALSE, p = 0.5 )
slant_lower |
should be in range [-20, 20]. |
slant_upper |
should be in range [-20, 20]. |
drop_length |
should be in range [0, 100]. |
drop_width |
should be in range [1, 5]. |
drop_color |
rain lines color, as a list of (r, g, b) values. |
blur_value |
blur value for the rain streaks; rainy views are blurry. |
brightness_coefficient |
rainy days are usually shady; should be in range [0, 1]. |
rain_type |
One of [NULL, "drizzle", "heavy", "torrential"] |
always_apply |
always_apply |
p |
p |
From https://github.com/UjjwalSaxena/Automold--Road-Augmentation-Library
None
image
uint8, float32
Torchvision's variant of cropping a random part of the input and rescaling it to some size.
icevision_RandomResizedCrop( height, width, scale = list(0.08, 1), ratio = list(0.75, 1.33333333333333), interpolation = 1, always_apply = FALSE, p = 1 )
icevision_RandomResizedCrop( height, width, scale = list(0.08, 1), ratio = list(0.75, 1.33333333333333), interpolation = 1, always_apply = FALSE, p = 1 )
height |
height |
width |
width |
scale |
scale |
ratio |
ratio |
interpolation |
interpolation |
always_apply |
always_apply |
p |
p |
None
image, mask, bboxes, keypoints
uint8, float32
Randomly rotate the input by 90 degrees zero or more times.
icevision_RandomRotate90(always_apply = FALSE, p = 0.5)
icevision_RandomRotate90(always_apply = FALSE, p = 0.5)
always_apply |
always_apply |
p |
p |
None
image, mask, bboxes, keypoints
uint8, float32
Randomly resize the input. Output image size is different from the input image size.
icevision_RandomScale( scale_limit = 0.1, interpolation = 1L, always_apply = FALSE, p = 0.5 )
icevision_RandomScale( scale_limit = 0.1, interpolation = 1L, always_apply = FALSE, p = 0.5 )
scale_limit |
scale_limit |
interpolation |
interpolation |
always_apply |
always_apply |
p |
p |
None
image, mask, bboxes, keypoints
uint8, float32
Simulates shadows for the image
icevision_RandomShadow( shadow_roi = list(0, 0.5, 1, 1), num_shadows_lower = 1, num_shadows_upper = 2, shadow_dimension = 5, always_apply = FALSE, p = 0.5 )
icevision_RandomShadow( shadow_roi = list(0, 0.5, 1, 1), num_shadows_lower = 1, num_shadows_upper = 2, shadow_dimension = 5, always_apply = FALSE, p = 0.5 )
shadow_roi |
shadow_roi |
num_shadows_lower |
num_shadows_lower |
num_shadows_upper |
num_shadows_upper |
shadow_dimension |
shadow_dimension |
always_apply |
always_apply |
p |
p |
From https://github.com/UjjwalSaxena/Automold--Road-Augmentation-Library
None
image
uint8, float32
Crop a random part of the input and rescale it to some size without loss of bboxes.
icevision_RandomSizedBBoxSafeCrop( height, width, erosion_rate = 0, interpolation = 1, always_apply = FALSE, p = 1 )
icevision_RandomSizedBBoxSafeCrop( height, width, erosion_rate = 0, interpolation = 1, always_apply = FALSE, p = 1 )
height |
height |
width |
width |
erosion_rate |
erosion_rate |
interpolation |
interpolation |
always_apply |
always_apply |
p |
p |
None
image, mask, bboxes
uint8, float32
Crop a random part of the input and rescale it to some size.
icevision_RandomSizedCrop( min_max_height, height, width, w2h_ratio = 1, interpolation = 1, always_apply = FALSE, p = 1 )
icevision_RandomSizedCrop( min_max_height, height, width, w2h_ratio = 1, interpolation = 1, always_apply = FALSE, p = 1 )
min_max_height |
min_max_height |
height |
height |
width |
width |
w2h_ratio |
w2h_ratio |
interpolation |
interpolation |
always_apply |
always_apply |
p |
p |
None
image, mask, bboxes, keypoints
uint8, float32
Bleach out some pixel values simulating snow.
icevision_RandomSnow( snow_point_lower = 0.1, snow_point_upper = 0.3, brightness_coeff = 2.5, always_apply = FALSE, p = 0.5 )
icevision_RandomSnow( snow_point_lower = 0.1, snow_point_upper = 0.3, brightness_coeff = 2.5, always_apply = FALSE, p = 0.5 )
snow_point_lower |
snow_point_lower |
snow_point_upper |
snow_point_upper |
brightness_coeff |
brightness_coeff |
always_apply |
always_apply |
p |
p |
From https://github.com/UjjwalSaxena/Automold--Road-Augmentation-Library
None
image
uint8, float32
Randomly splits items.
icevision_RandomSplitter(probs, seed = NULL)
icevision_RandomSplitter(probs, seed = NULL)
probs |
'Sequence' of probabilities that must sum to one. The length of the 'Sequence' is the number of groups to split the items into. |
seed |
Internal seed used for shuffling the items. Define this if you need reproducible results. |
None
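A minimal sketch of an 80/20 random split with a fixed seed; the values are illustrative:
## Not run: 
splitter = icevision_RandomSplitter(probs = list(0.8, 0.2), seed = 42L)
## End(Not run)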
Simulates Sun Flare for the image
icevision_RandomSunFlare( flare_roi = list(0, 0, 1, 0.5), angle_lower = 0, angle_upper = 1, num_flare_circles_lower = 6, num_flare_circles_upper = 10, src_radius = 400, src_color = list(255, 255, 255), always_apply = FALSE, p = 0.5 )
icevision_RandomSunFlare( flare_roi = list(0, 0, 1, 0.5), angle_lower = 0, angle_upper = 1, num_flare_circles_lower = 6, num_flare_circles_upper = 10, src_radius = 400, src_color = list(255, 255, 255), always_apply = FALSE, p = 0.5 )
flare_roi |
flare_roi |
angle_lower |
angle_lower |
angle_upper |
angle_upper |
num_flare_circles_lower |
num_flare_circles_lower |
num_flare_circles_upper |
num_flare_circles_upper |
src_radius |
src_radius |
src_color |
src_color |
always_apply |
always_apply |
p |
p |
From https://github.com/UjjwalSaxena/Automold--Road-Augmentation-Library
None
image
uint8, float32
Read_bgr_image
icevision_read_bgr_image(path)
icevision_read_bgr_image(path)
path |
path |
None
Read_rgb_image
icevision_read_rgb_image(path)
icevision_read_rgb_image(path)
path |
path |
None
Resize the input to the given height and width.
icevision_Resize(height, width, interpolation = 1, always_apply = FALSE, p = 1)
icevision_Resize(height, width, interpolation = 1, always_apply = FALSE, p = 1)
height |
height |
width |
width |
interpolation |
interpolation |
always_apply |
always_apply |
p |
p |
None
image, mask, bboxes, keypoints
uint8, float32
Resize_and_pad
icevision_resize_and_pad( size, pad = partial(icevision_PadIfNeeded, border_mode = 0, value = c(124L, 116L, 104L)) )
icevision_resize_and_pad( size, pad = partial(icevision_PadIfNeeded, border_mode = 0, value = c(124L, 116L, 104L)) )
size |
size |
pad |
pad |
None
Randomly shift values for each channel of the input RGB image.
icevision_RGBShift( r_shift_limit = 20, g_shift_limit = 20, b_shift_limit = 20, always_apply = FALSE, p = 0.5 )
icevision_RGBShift( r_shift_limit = 20, g_shift_limit = 20, b_shift_limit = 20, always_apply = FALSE, p = 0.5 )
r_shift_limit |
r_shift_limit |
g_shift_limit |
g_shift_limit |
b_shift_limit |
b_shift_limit |
always_apply |
always_apply |
p |
p |
None
image
uint8, float32
Rotate the input by an angle selected randomly from the uniform distribution.
icevision_Rotate( limit = 90, interpolation = 1, border_mode = 4, value = NULL, mask_value = NULL, always_apply = FALSE, p = 0.5 )
icevision_Rotate( limit = 90, interpolation = 1, border_mode = 4, value = NULL, mask_value = NULL, always_apply = FALSE, p = 0.5 )
limit |
limit |
interpolation |
interpolation |
border_mode |
border_mode |
value |
value |
mask_value |
mask_value |
always_apply |
always_apply |
p |
p |
None
image, mask, bboxes, keypoints
uint8, float32
Randomly apply affine transforms: translate, scale and rotate the input.
icevision_ShiftScaleRotate( shift_limit = 0.0625, scale_limit = 0.1, rotate_limit = 45, interpolation = 1, border_mode = 4, value = NULL, mask_value = NULL, shift_limit_x = NULL, shift_limit_y = NULL, always_apply = FALSE, p = 0.5 )
icevision_ShiftScaleRotate( shift_limit = 0.0625, scale_limit = 0.1, rotate_limit = 45, interpolation = 1, border_mode = 4, value = NULL, mask_value = NULL, shift_limit_x = NULL, shift_limit_y = NULL, always_apply = FALSE, p = 0.5 )
shift_limit |
shift_limit |
scale_limit |
scale_limit |
rotate_limit |
rotate_limit |
interpolation |
interpolation |
border_mode |
border_mode |
value |
value |
mask_value |
mask_value |
shift_limit_x |
shift_limit_x |
shift_limit_y |
shift_limit_y |
always_apply |
always_apply |
p |
p |
None
image, mask, keypoints
uint8, float32
SingleSplitSplitter
icevision_SingleSplitSplitter(...)
icevision_SingleSplitSplitter(...)
... |
arguments to pass |
Puts all items in a single group, without shuffling.
Rescale an image so that minimum side is equal to max_size, keeping the aspect ratio of the initial image.
icevision_SmallestMaxSize( max_size = 1024, interpolation = 1, always_apply = FALSE, p = 1 )
icevision_SmallestMaxSize( max_size = 1024, interpolation = 1, always_apply = FALSE, p = 1 )
max_size |
max_size |
interpolation |
interpolation |
always_apply |
always_apply |
p |
p |
None
image, mask, bboxes, keypoints
uint8, float32
Invert all pixel values above a threshold.
icevision_Solarize(threshold = 128, always_apply = FALSE, p = 0.5)
icevision_Solarize(threshold = 128, always_apply = FALSE, p = 0.5)
threshold |
threshold |
always_apply |
always_apply |
p |
p |
None
image
any
Divide pixel values by 'max_value' to get a float32 output array where all values lie in the range [0, 1.0].
icevision_ToFloat(max_value = NULL, always_apply = FALSE, p = 1)
icevision_ToFloat(max_value = NULL, always_apply = FALSE, p = 1)
max_value |
max_value |
always_apply |
always_apply |
p |
p |
If 'max_value' is NULL the transform will try to infer the maximum value by inspecting the data type of the input image. See also: 'FromFloat' (albumentations.augmentations.transforms.FromFloat).
None
albumentations.augmentations.transforms.FromFloat
image
any type
Convert the input RGB image to grayscale. If the mean pixel value for the resulting image is greater than 127, invert the resulting grayscale image.
icevision_ToGray(always_apply = FALSE, p = 0.5)
icevision_ToGray(always_apply = FALSE, p = 0.5)
always_apply |
always_apply |
p |
p |
None
image
uint8, float32
Applies a sepia filter to the input RGB image
icevision_ToSepia(always_apply = FALSE, p = 0.5)
icevision_ToSepia(always_apply = FALSE, p = 0.5)
always_apply |
always_apply |
p |
p |
None
image
uint8, float32
Transpose the input by swapping rows and columns.
icevision_Transpose(always_apply = FALSE, p = 0.5)
icevision_Transpose(always_apply = FALSE, p = 0.5)
always_apply |
always_apply |
p |
p |
None
image, mask, bboxes, keypoints
uint8, float32
Flip the input vertically around the x-axis.
icevision_VerticalFlip(always_apply = FALSE, p = 0.5)
icevision_VerticalFlip(always_apply = FALSE, p = 0.5)
always_apply |
always_apply |
p |
p |
None
image, mask, bboxes, keypoints
uint8, float32
ICNR init of 'x', with 'scale' and 'init' function
icnr_init(x, scale = 2, init = nn()$init$kaiming_normal_)
icnr_init(x, scale = 2, init = nn()$init$kaiming_normal_)
x |
tensor |
scale |
int, scale |
init |
initializer |
None
Works like a dictionary that automatically assigns values for new keys.
IDMap(initial_names = NULL)
IDMap(initial_names = NULL)
initial_names |
initial_names |
None
Open an 'Image' from path 'fn'
Image_create(fn)
Image_create(fn)
fn |
file name |
None
Opens and identifies the given image file.
Image_open(fp, mode = "r")
Image_open(fp, mode = "r")
fp |
fp |
mode |
mode |
None
Returns a resized copy of this image.
Image_resize(img, size, resample = 3, box = NULL, reducing_gap = NULL)
Image_resize(img, size, resample = 3, box = NULL, reducing_gap = NULL)
img |
image |
size |
size |
resample |
resample |
box |
box |
reducing_gap |
reducing_gap |
None
Transform image to byte tensor in 'c*h*w' dim order.
image2tensor(img)
image2tensor(img)
img |
image |
None
A 'TransformBlock' for images of 'cls'
ImageBlock(...)
ImageBlock(...)
... |
parameters to pass |
block
Open an 'Image' from path 'fn'
ImageBW_create(fn)
ImageBW_create(fn)
fn |
file name |
None
Create from 'path/csv_fname' using 'fn_col' and 'label_col'
ImageDataLoaders_from_csv( path, csv_fname = "labels.csv", header = "infer", delimiter = NULL, valid_pct = 0.2, seed = NULL, fn_col = 0, folder = NULL, suff = "", label_col = 1, label_delim = NULL, y_block = NULL, valid_col = NULL, item_tfms = NULL, batch_tfms = NULL, bs = 64, val_bs = NULL, size = NULL, shuffle_train = TRUE, device = NULL, ... )
ImageDataLoaders_from_csv( path, csv_fname = "labels.csv", header = "infer", delimiter = NULL, valid_pct = 0.2, seed = NULL, fn_col = 0, folder = NULL, suff = "", label_col = 1, label_delim = NULL, y_block = NULL, valid_col = NULL, item_tfms = NULL, batch_tfms = NULL, bs = 64, val_bs = NULL, size = NULL, shuffle_train = TRUE, device = NULL, ... )
path |
The folder where to work |
csv_fname |
csv file name |
header |
header |
delimiter |
delimiter |
valid_pct |
validation percentage |
seed |
random seed |
fn_col |
column name |
folder |
folder name |
suff |
suff |
label_col |
label column |
label_delim |
label delimiter |
y_block |
y_block |
valid_col |
validation column |
item_tfms |
One or several transforms applied to the items before batching them |
batch_tfms |
One or several transforms applied to the batches once they are formed |
bs |
batch size |
val_bs |
The batch size for the validation DataLoader (defaults to bs) |
size |
image size |
shuffle_train |
If we shuffle the training DataLoader or not |
device |
device name |
... |
additional parameters to pass |
None
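A hedged sketch, assuming a 'labels.csv' whose first column holds file names (relative to an 'images' subfolder) and whose second column holds labels; the paths and sizes are illustrative:
## Not run: 
dls = ImageDataLoaders_from_csv(
  'data', csv_fname = 'labels.csv', folder = 'images',
  valid_pct = 0.2, item_tfms = RandomResizedCrop(224, min_scale = 0.75),
  bs = 32
)
## End(Not run)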
Create DataLoaders from a given 'dblock'
ImageDataLoaders_from_dblock( dblock, source, path = ".", bs = 64, val_bs = NULL, shuffle_train = TRUE, device = NULL, ... )
ImageDataLoaders_from_dblock( dblock, source, path = ".", bs = 64, val_bs = NULL, shuffle_train = TRUE, device = NULL, ... )
dblock |
dblock |
source |
source folder |
path |
The folder where to work |
bs |
batch size |
val_bs |
The batch size for the validation DataLoader (defaults to bs) |
shuffle_train |
If we shuffle the training DataLoader or not |
device |
device name |
... |
additional parameters to pass |
None
Create from 'df' using 'fn_col' and 'label_col'
ImageDataLoaders_from_df( df, path = ".", valid_pct = 0.2, seed = NULL, fn_col = 0, folder = NULL, suff = "", label_col = 1, label_delim = NULL, y_block = NULL, valid_col = NULL, item_tfms = NULL, batch_tfms = NULL, bs = 64, val_bs = NULL, shuffle_train = TRUE, device = NULL, ... )
ImageDataLoaders_from_df( df, path = ".", valid_pct = 0.2, seed = NULL, fn_col = 0, folder = NULL, suff = "", label_col = 1, label_delim = NULL, y_block = NULL, valid_col = NULL, item_tfms = NULL, batch_tfms = NULL, bs = 64, val_bs = NULL, shuffle_train = TRUE, device = NULL, ... )
df |
data frame |
path |
The folder where to work |
valid_pct |
validation percentage |
seed |
random seed |
fn_col |
column name |
folder |
folder name |
suff |
suff |
label_col |
label column |
label_delim |
label separator |
y_block |
y_block |
valid_col |
validation column |
item_tfms |
One or several transforms applied to the items before batching them |
batch_tfms |
One or several transforms applied to the batches once they are formed |
bs |
batch size |
val_bs |
The batch size for the validation DataLoader (defaults to bs) |
shuffle_train |
shuffle_train |
device |
device |
... |
additional parameters to pass |
None
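A hedged sketch, assuming a data frame 'df' with a file-name column and a label column in the default positions; the path is illustrative:
## Not run: 
dls = ImageDataLoaders_from_df(
  df, path = 'data', folder = 'images',
  valid_pct = 0.2, bs = 32
)
## End(Not run)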
Create from imagenet style dataset in 'path' with 'train' and 'valid' subfolders (or provide 'valid_pct')
ImageDataLoaders_from_folder( path, train = "train", valid = "valid", valid_pct = NULL, seed = NULL, vocab = NULL, item_tfms = NULL, batch_tfms = NULL, bs = 64, val_bs = NULL, shuffle_train = TRUE, device = NULL, size = NULL, ... )
ImageDataLoaders_from_folder( path, train = "train", valid = "valid", valid_pct = NULL, seed = NULL, vocab = NULL, item_tfms = NULL, batch_tfms = NULL, bs = 64, val_bs = NULL, shuffle_train = TRUE, device = NULL, size = NULL, ... )
path |
The folder where to work |
train |
train data |
valid |
validation data |
valid_pct |
validation percentage |
seed |
random seed |
vocab |
vocabulary |
item_tfms |
One or several transforms applied to the items before batching them |
batch_tfms |
One or several transforms applied to the batches once they are formed |
bs |
batch size |
val_bs |
The batch size for the validation DataLoader (defaults to bs) |
shuffle_train |
If we shuffle the training DataLoader or not |
device |
device name |
size |
image size |
... |
additional parameters to pass |
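A hedged sketch, assuming an imagenet-style layout ('data/train/<class>/...' and 'data/valid/<class>/...'); the path and sizes are illustrative:
## Not run: 
dls = ImageDataLoaders_from_folder(
  'data', train = 'train', valid = 'valid',
  item_tfms = RandomResizedCrop(224, min_scale = 0.75), bs = 64
)
## End(Not run)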
Create from list of 'fnames' and 'labels' in 'path'
ImageDataLoaders_from_lists( path, fnames, labels, valid_pct = 0.2, seed = NULL, y_block = NULL, item_tfms = NULL, batch_tfms = NULL, bs = 64, val_bs = NULL, shuffle_train = TRUE, device = NULL, ... )
ImageDataLoaders_from_lists( path, fnames, labels, valid_pct = 0.2, seed = NULL, y_block = NULL, item_tfms = NULL, batch_tfms = NULL, bs = 64, val_bs = NULL, shuffle_train = TRUE, device = NULL, ... )
path |
The folder where to work |
fnames |
file names |
labels |
labels |
valid_pct |
validation percentage |
seed |
random seed |
y_block |
y_block |
item_tfms |
One or several transforms applied to the items before batching them |
batch_tfms |
One or several transforms applied to the batches once they are formed |
bs |
batch size |
val_bs |
The batch size for the validation DataLoader (defaults to bs) |
shuffle_train |
If we shuffle the training DataLoader or not |
device |
device name |
... |
additional parameters to pass |
None
Create from the name attribute of 'fnames' in 'path' with regular expression 'pat'
ImageDataLoaders_from_name_re( path, fnames, pat, bs = 64, val_bs = NULL, shuffle_train = TRUE, device = NULL, item_tfms = NULL, batch_tfms = NULL, ... )
ImageDataLoaders_from_name_re( path, fnames, pat, bs = 64, val_bs = NULL, shuffle_train = TRUE, device = NULL, item_tfms = NULL, batch_tfms = NULL, ... )
path |
The folder where to work |
fnames |
file names |
pat |
an argument that requires regex |
bs |
The batch size |
val_bs |
The batch size for the validation DataLoader (defaults to bs) |
shuffle_train |
If we shuffle the training DataLoader or not |
device |
device name |
item_tfms |
One or several transforms applied to the items before batching them |
batch_tfms |
One or several transforms applied to the batches once they are formed |
... |
additional parameters to pass |
None
## Not run: 
URLs_PETS()
path = 'oxford-iiit-pet'
dls = ImageDataLoaders_from_name_re(
  path, fnames, pat = '(.+)_\\d+.jpg$',
  item_tfms = RandomResizedCrop(460, min_scale = 0.75), bs = 10,
  batch_tfms = list(aug_transforms(size = 299, max_warp = 0),
                    Normalize_from_stats(imagenet_stats())),
  device = 'cuda'
)
## End(Not run)
Create from a list of 'fnames' in 'path' with 'label_func'
ImageDataLoaders_from_path_func( path, fnames, label_func, valid_pct = 0.2, seed = NULL, item_tfms = NULL, batch_tfms = NULL, bs = 64, val_bs = NULL, shuffle_train = TRUE, device = NULL, ... )
ImageDataLoaders_from_path_func( path, fnames, label_func, valid_pct = 0.2, seed = NULL, item_tfms = NULL, batch_tfms = NULL, bs = 64, val_bs = NULL, shuffle_train = TRUE, device = NULL, ... )
path |
The folder where to work |
fnames |
file names |
label_func |
label function |
valid_pct |
The random percentage of the dataset to set aside for validation (with an optional seed) |
seed |
random seed |
item_tfms |
One or several transforms applied to the items before batching them |
batch_tfms |
One or several transforms applied to the batches once they are formed |
bs |
batch size |
val_bs |
The batch size for the validation DataLoader (defaults to bs) |
shuffle_train |
If we shuffle the training DataLoader or not |
device |
device name |
... |
additional parameters to pass |
None
Create from a list of 'fnames' in 'path' with regular expression 'pat'
ImageDataLoaders_from_path_re( path, fnames, pat, valid_pct = 0.2, seed = NULL, item_tfms = NULL, batch_tfms = NULL, bs = 64, val_bs = NULL, shuffle_train = TRUE, device = NULL, ... )
ImageDataLoaders_from_path_re( path, fnames, pat, valid_pct = 0.2, seed = NULL, item_tfms = NULL, batch_tfms = NULL, bs = 64, val_bs = NULL, shuffle_train = TRUE, device = NULL, ... )
path |
The folder where to work |
fnames |
file names |
pat |
an argument that requires regex |
valid_pct |
The random percentage of the dataset to set aside for validation (with an optional seed) |
seed |
random seed |
item_tfms |
One or several transforms applied to the items before batching them |
batch_tfms |
One or several transforms applied to the batches once they are formed |
bs |
batch size |
val_bs |
The batch size for the validation DataLoader (defaults to bs) |
shuffle_train |
If we shuffle the training DataLoader or not |
device |
device name |
... |
additional parameters to pass |
None
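Below is a minimal sketch of this loader, assuming the Oxford-IIIT Pet images are on disk and that 'get_image_files()' is available to collect 'fnames'; the path and pattern are illustrative.
## Not run: path = 'oxford-iiit-pet/images' fnames = get_image_files(path) dls = ImageDataLoaders_from_path_re(path, fnames, pat = '(.+)_\\d+.jpg$', valid_pct = 0.2, seed = 42, item_tfms = Resize(224), bs = 32) ## End(Not run)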
Imagenet statistics
imagenet_stats()
imagenet_stats()
vector
## Not run: imagenet_stats() ## End(Not run)
## Not run: imagenet_stats() ## End(Not run)
Return the shape of the first weight layer in 'm'.
in_channels(m)
in_channels(m)
m |
parameters |
None
The Inception module from 'ni' inputs to len('kss')*'nb_filters'+'bottleneck_size'
InceptionModule( ni, nb_filters = 32, kss = c(39, 19, 9), bottleneck_size = 32, stride = 1 )
InceptionModule( ni, nb_filters = 32, kss = c(39, 19, 9), bottleneck_size = 32, stride = 1 )
ni |
number of input channels |
nb_filters |
the number of filters |
kss |
kernel size |
bottleneck_size |
bottleneck size |
stride |
stride |
module
Split 'items' so that 'valid_idx' are in the validation set and the others in the training set
IndexSplitter(valid_idx)
IndexSplitter(valid_idx)
valid_idx |
The indices to use for the validation set (defaults to a random split otherwise) |
None
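For example, to pin specific items to the validation set (the indices below are illustrative):
## Not run: splitter = IndexSplitter(c(1, 5, 9)) # items at these indices form the validation set ## End(Not run)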
Initialize a wandb Run.
init(...)
init(...)
... |
parameters to pass |
wandb Run object
None
See https://docs.wandb.com/library/init for details.
Initialize 'm' weights with 'func' and set 'bias' to 0.
init_default(m, func = nn()$init$kaiming_normal_)
init_default(m, func = nn()$init$kaiming_normal_)
m |
parameters |
func |
function |
None
Init_linear
init_linear(m, act_func = NULL, init = "auto", bias_std = 0.01)
init_linear(m, act_func = NULL, init = "auto", bias_std = 0.01)
m |
parameter |
act_func |
activation function |
init |
initializer |
bias_std |
bias standard deviation |
None
Install fastai
install_fastai( version, gpu = FALSE, cuda_version = "11.8", overwrite = FALSE, extra_pkgs = c("timm", "fastinference[interp]"), TPU = FALSE )
install_fastai( version, gpu = FALSE, cuda_version = "11.8", overwrite = FALSE, extra_pkgs = c("timm", "fastinference[interp]"), TPU = FALSE )
version |
specify version |
gpu |
installation of gpu |
cuda_version |
if gpu is TRUE, then a cuda version is required. By default it is 11.8 |
overwrite |
will install all the dependencies |
extra_pkgs |
character vector of additional packages |
TPU |
official way to install Pytorch-XLA 1.13 |
None
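A typical call is sketched below; the version string is an assumption, and the GPU variant pins the CUDA build shown in the usage above.
## Not run: install_fastai(version = '2.2.2') install_fastai(version = '2.2.2', gpu = TRUE, cuda_version = '11.8') ## End(Not run)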
InstanceNorm layer with 'nf' features and 'ndim' initialized depending on 'norm_type'.
InstanceNorm( nf, ndim = 2, norm_type = 5, affine = TRUE, eps = 1e-05, momentum = 0.1, track_running_stats = FALSE )
InstanceNorm( nf, ndim = 2, norm_type = 5, affine = TRUE, eps = 1e-05, momentum = 0.1, track_running_stats = FALSE )
nf |
input shape |
ndim |
dimension number |
norm_type |
normalization type |
affine |
affine |
eps |
epsilon |
momentum |
momentum |
track_running_stats |
track running statistics |
None
Transform image to float tensor, optionally dividing by 255 (e.g. for images).
IntToFloatTensor(div = 255, div_mask = 1)
IntToFloatTensor(div = 255, div_mask = 1)
div |
divide value |
div_mask |
divide mask |
None
Invisible Tensor
InvisibleTensor(x)
InvisibleTensor(x)
x |
tensor |
None
Is Rmarkdown?
is_rmarkdown()
is_rmarkdown()
logical True/False
Jaccard score for single-label classification problems
Jaccard( axis = -1, labels = NULL, pos_label = 1, average = "binary", sample_weight = NULL )
Jaccard( axis = -1, labels = NULL, pos_label = 1, average = "binary", sample_weight = NULL )
axis |
axis |
labels |
labels |
pos_label |
pos_label |
average |
average |
sample_weight |
sample_weight |
None
Implementation of the Jaccard coefficient that is lighter in RAM
JaccardCoeff(axis = 1)
JaccardCoeff(axis = 1)
axis |
axis |
None
Jaccard score for multi-label classification problems
JaccardMulti( thresh = 0.5, sigmoid = TRUE, labels = NULL, pos_label = 1, average = "macro", sample_weight = NULL )
JaccardMulti( thresh = 0.5, sigmoid = TRUE, labels = NULL, pos_label = 1, average = "macro", sample_weight = NULL )
thresh |
thresh |
sigmoid |
sigmoid |
labels |
labels |
pos_label |
pos_label |
average |
average |
sample_weight |
sample_weight |
None
Behaves like a list of 'items' but can also index with list of indices or masks
L(...)
L(...)
... |
arguments to pass |
Flattens input and output, same as nn$L1LossFlat
L1LossFlat(...)
L1LossFlat(...)
... |
parameters to pass |
Loss object
L2 regularization as adding 'wd*p' to 'p$grad'
l2_reg(p, lr, wd, do_wd = TRUE, ...)
l2_reg(p, lr, wd, do_wd = TRUE, ...)
p |
p |
lr |
learning rate |
wd |
weight decay |
do_wd |
do_wd |
... |
additional arguments to pass |
None
## Not run: tst_param = function(val, grad = NULL) { "Create a tensor with `val` and a gradient of `grad` for testing" res = tensor(val) %>% float() if(is.null(grad)) { grad = tensor(val / 10) } else { grad = tensor(grad) } res$grad = grad %>% float() res } p = tst_param(1., 0.1) l2_reg(p, 1., 0.1) ## End(Not run)
## Not run: tst_param = function(val, grad = NULL) { "Create a tensor with `val` and a gradient of `grad` for testing" res = tensor(val) %>% float() if(is.null(grad)) { grad = tensor(val / 10) } else { grad = tensor(grad) } res$grad = grad %>% float() res } p = tst_param(1., 0.1) l2_reg(p, 1., 0.1) ## End(Not run)
Basic type for a list of bounding boxes in an image
LabeledBBox(...)
LabeledBBox(...)
... |
parameters to pass |
None
Same as 'nn$Module', but no need for subclasses to call 'super()$__init__'
LabelSmoothingCrossEntropy(eps = 0.1, reduction = "mean")
LabelSmoothingCrossEntropy(eps = 0.1, reduction = "mean")
eps |
epsilon |
reduction |
reduction, defaults to mean |
Loss object
Same as 'nn$Module', but no need for subclasses to call 'super()$__init__'
LabelSmoothingCrossEntropyFlat(...)
LabelSmoothingCrossEntropyFlat(...)
... |
parameters to pass |
Loss object
Step for LAMB with 'lr' on 'p'
lamb_step(p, lr, mom, step, sqr_mom, grad_avg, sqr_avg, eps, ...)
lamb_step(p, lr, mom, step, sqr_mom, grad_avg, sqr_avg, eps, ...)
p |
p |
lr |
learning rate |
mom |
momentum |
step |
step |
sqr_mom |
sqr momentum |
grad_avg |
gradient average |
sqr_avg |
sqr average |
eps |
epsilon |
... |
additional arguments to pass |
None
An easy way to create a pytorch layer for a simple 'func'
Lambda(func)
Lambda(func)
func |
function |
None
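A minimal sketch wrapping an R function as a layer; the flattening function is an illustrative choice, not part of the API.
## Not run: flatten = Lambda(function(x) x$view(c(x$size(0L), -1L))) # keep the batch dimension, flatten the rest ## End(Not run)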
Create a 'Learner' with a language model from 'dls' and 'arch'.
language_model_learner( dls, arch, config = NULL, drop_mult = 1, backwards = FALSE, pretrained = TRUE, pretrained_fnames = NULL, opt_func = Adam(), lr = 0.001, cbs = NULL, metrics = NULL, path = NULL, model_dir = "models", wd = NULL, wd_bn_bias = FALSE, train_bn = TRUE, moms = list(0.95, 0.85, 0.95), ... )
language_model_learner( dls, arch, config = NULL, drop_mult = 1, backwards = FALSE, pretrained = TRUE, pretrained_fnames = NULL, opt_func = Adam(), lr = 0.001, cbs = NULL, metrics = NULL, path = NULL, model_dir = "models", wd = NULL, wd_bn_bias = FALSE, train_bn = TRUE, moms = list(0.95, 0.85, 0.95), ... )
dls |
dls |
arch |
arch |
config |
config |
drop_mult |
drop_mult |
backwards |
backwards |
pretrained |
pretrained |
pretrained_fnames |
pretrained_fnames |
opt_func |
opt_func |
lr |
lr |
cbs |
cbs |
metrics |
metrics |
path |
path |
model_dir |
model_dir |
wd |
wd |
wd_bn_bias |
wd_bn_bias |
train_bn |
train_bn |
moms |
moms |
... |
additional arguments |
None
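A minimal sketch, assuming 'dls' is a language-model 'DataLoaders' and the 'AWD_LSTM()' architecture helper is available.
## Not run: learn = language_model_learner(dls, AWD_LSTM(), drop_mult = 0.3, metrics = list(accuracy, Perplexity())) ## End(Not run)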
Computes the local lr before weight decay is applied
larc_layer_lr(p, lr, trust_coeff, wd, eps, clip = TRUE, ...)
larc_layer_lr(p, lr, trust_coeff, wd, eps, clip = TRUE, ...)
p |
p |
lr |
learning rate |
trust_coeff |
trust_coeff |
wd |
weight decay |
eps |
epsilon |
clip |
clip |
... |
additional arguments to pass |
None
Step for LARC 'local_lr' on 'p'
larc_step(p, local_lr, grad_avg = NULL, ...)
larc_step(p, local_lr, grad_avg = NULL, ...)
p |
p |
local_lr |
local learning rate |
grad_avg |
gradient average |
... |
additional args to pass |
None
Return layer infos of 'model' on 'xb' (only supports batch-first inputs)
layer_info(learn, ...)
layer_info(learn, ...)
learn |
learner/model |
... |
additional arguments |
None
Learner
Learner(...)
Learner(...)
... |
parameters to pass |
None
## Not run: model = LitModel() data = Data_Loaders(model$train_dataloader(), model$val_dataloader())$cuda() learn = Learner(data, model, loss_func = F$cross_entropy, opt_func = Adam, metrics = accuracy) ## End(Not run)
## Not run: model = LitModel() data = Data_Loaders(model$train_dataloader(), model$val_dataloader())$cuda() learn = Learner(data, model, loss_func = F$cross_entropy, opt_func = Adam, metrics = accuracy) ## End(Not run)
Length
## S3 method for class 'torch.Tensor' length(x)
## S3 method for class 'torch.Tensor' length(x)
x |
tensor |
tensor
Length
## S3 method for class 'fastai.torch_core.TensorMask' length(x)
## S3 method for class 'fastai.torch_core.TensorMask' length(x)
x |
tensor |
tensor
Less
## S3 method for class 'torch.Tensor' a < b
## S3 method for class 'torch.Tensor' a < b
a |
tensor |
b |
tensor |
tensor
Less or equal
## S3 method for class 'torch.Tensor' a <= b
## S3 method for class 'torch.Tensor' a <= b
a |
tensor |
b |
tensor |
tensor
Apply 'fs' to the logits
LightingTfm(fs, ...)
LightingTfm(fs, ...)
fs |
fs |
... |
parameters to pass |
None
Module grouping 'BatchNorm1d', 'Dropout' and 'Linear' layers
LinBnDrop(n_in, n_out, bn = TRUE, p = 0, act = NULL, lin_first = FALSE)
LinBnDrop(n_in, n_out, bn = TRUE, p = 0, act = NULL, lin_first = FALSE)
n_in |
input shape |
n_out |
output shape |
bn |
bn |
p |
probability |
act |
activation |
lin_first |
linear first |
None
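For example, a block mapping 200 features to 50 with batch norm and dropout (sizes are illustrative):
## Not run: lbd = LinBnDrop(200, 50, bn = TRUE, p = 0.1, act = nn()$ReLU()) ## End(Not run)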
To go on top of an RNNCore module and create a Language Model.
LinearDecoder(n_out, n_hid, output_p = 0.1, tie_encoder = NULL, bias = TRUE)
LinearDecoder(n_out, n_hid, output_p = 0.1, tie_encoder = NULL, bias = TRUE)
n_out |
n_out |
n_hid |
n_hid |
output_p |
output_p |
tie_encoder |
tie_encoder |
bias |
bias |
None
A 'DataLoader' suitable for language modeling
LMDataLoader( dataset, lens = NULL, cache = 2, bs = 64, seq_len = 72, num_workers = 0, shuffle = FALSE, verbose = FALSE, do_setup = TRUE, pin_memory = FALSE, timeout = 0L, batch_size = NULL, drop_last = FALSE, indexed = NULL, n = NULL, device = NULL )
LMDataLoader( dataset, lens = NULL, cache = 2, bs = 64, seq_len = 72, num_workers = 0, shuffle = FALSE, verbose = FALSE, do_setup = TRUE, pin_memory = FALSE, timeout = 0L, batch_size = NULL, drop_last = FALSE, indexed = NULL, n = NULL, device = NULL )
dataset |
dataset |
lens |
lens |
cache |
cache |
bs |
bs |
seq_len |
seq_len |
num_workers |
num_workers |
shuffle |
shuffle |
verbose |
verbose |
do_setup |
do_setup |
pin_memory |
pin_memory |
timeout |
timeout |
batch_size |
batch_size |
drop_last |
drop_last |
indexed |
indexed |
n |
n |
device |
device |
text loader
Add functionality to 'TextLearner' when dealing with a language model
LMLearner( dls, model, alpha = 2, beta = 1, moms = list(0.8, 0.7, 0.8), loss_func = NULL, opt_func = Adam(), lr = 0.001, splitter = trainable_params(), cbs = NULL, metrics = NULL, path = NULL, model_dir = "models", wd = NULL, wd_bn_bias = FALSE, train_bn = TRUE )
LMLearner( dls, model, alpha = 2, beta = 1, moms = list(0.8, 0.7, 0.8), loss_func = NULL, opt_func = Adam(), lr = 0.001, splitter = trainable_params(), cbs = NULL, metrics = NULL, path = NULL, model_dir = "models", wd = NULL, wd_bn_bias = FALSE, train_bn = TRUE )
dls |
dls |
model |
model |
alpha |
alpha |
beta |
beta |
moms |
moms |
loss_func |
loss_func |
opt_func |
opt_func |
lr |
lr |
splitter |
splitter |
cbs |
cbs |
metrics |
metrics |
path |
path |
model_dir |
model_dir |
wd |
wd |
wd_bn_bias |
wd_bn_bias |
train_bn |
train_bn |
text loader
None
Return 'text' and the 'n_words' that come after
LMLearner_predict( text, n_words = 1, no_unk = TRUE, temperature = 1, min_p = NULL, no_bar = FALSE, decoder = decode_spec_tokens(), only_last_word = FALSE )
LMLearner_predict( text, n_words = 1, no_unk = TRUE, temperature = 1, min_p = NULL, no_bar = FALSE, decoder = decode_spec_tokens(), only_last_word = FALSE )
text |
text |
n_words |
n_words |
no_unk |
no_unk |
temperature |
temperature |
min_p |
min_p |
no_bar |
no_bar |
decoder |
decoder |
only_last_word |
only_last_word |
None
A helper function for getting a DataLoader for images in the folder 'test_path', with batch size 'bs', and number of workers 'num_workers'
load_dataset(test_path, bs = 4, num_workers = 4)
load_dataset(test_path, bs = 4, num_workers = 4)
test_path |
test path (directory) |
bs |
batch size |
num_workers |
number of workers |
None
Load 'wgts' in 'model' ignoring the names of the keys, just taking parameters in order
load_ignore_keys(model, wgts)
load_ignore_keys(model, wgts)
model |
model |
wgts |
wgts |
None
Open and load a 'PIL.Image' and convert to 'mode'
load_image(fn, mode = NULL)
load_image(fn, mode = NULL)
fn |
file name |
mode |
mode |
None
Load a 'Learner' object in 'fname', optionally putting it on the 'cpu'
load_learner(fname, cpu = TRUE)
load_learner(fname, cpu = TRUE)
fname |
fname |
cpu |
cpu or not |
learner object
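A minimal sketch, assuming a learner was previously exported to 'export.pkl' (the file name is an assumption):
## Not run: learn = load_learner('export.pkl', cpu = TRUE) ## End(Not run)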
Load 'model' from 'file' along with 'opt' (if available, and if 'with_opt')
load_model_text( file, model, opt, with_opt = NULL, device = NULL, strict = TRUE )
load_model_text( file, model, opt, with_opt = NULL, device = NULL, strict = TRUE )
file |
file |
model |
model |
opt |
opt |
with_opt |
with_opt |
device |
device |
strict |
strict |
None
Utility function to quickly load a tokenized csv and the corresponding counter
load_tokenized_csv(fname)
load_tokenized_csv(fname)
fname |
file name |
None
A loader from Catalyst
loaders()
loaders()
None
## Not run: # trigger download loaders() ## End(Not run)
## Not run: # trigger download loaders() ## End(Not run)
Log
## S3 method for class 'torch.Tensor' log(x, base = exp(1))
## S3 method for class 'torch.Tensor' log(x, base = exp(1))
x |
tensor |
base |
base parameter |
tensor
Log
## S3 method for class 'fastai.torch_core.TensorMask' log(x, base = exp(1))
## S3 method for class 'fastai.torch_core.TensorMask' log(x, base = exp(1))
x |
tensor |
base |
base parameter |
tensor
Log1p
## S3 method for class 'torch.Tensor' log1p(x)
## S3 method for class 'torch.Tensor' log1p(x)
x |
tensor |
tensor
Log1p
## S3 method for class 'fastai.torch_core.TensorMask' log1p(x)
## S3 method for class 'fastai.torch_core.TensorMask' log1p(x)
x |
tensor |
tensor
Logical_and
## S3 method for class 'torch.Tensor' x & y
## S3 method for class 'torch.Tensor' x & y
x |
tensor |
y |
tensor |
tensor
Logical_not
## S3 method for class 'torch.Tensor' !x
## S3 method for class 'torch.Tensor' !x
x |
tensor |
tensor
Logical_or
## S3 method for class 'torch.Tensor' x | y
## S3 method for class 'torch.Tensor' x | y
x |
tensor |
y |
tensor |
tensor
Log in to W&B.
login(anonymous = NULL, key = NULL, relogin = NULL, host = NULL, force = NULL)
login(anonymous = NULL, key = NULL, relogin = NULL, host = NULL, force = NULL)
anonymous |
must, never, allow, false, true |
key |
API key (secret) |
relogin |
relogin or not |
host |
host address |
force |
whether to force a user to be logged into wandb when running a script |
None
Lookahead
Lookahead(...)
Lookahead(...)
... |
parameters to pass |
None
Create a metric from 'loss_func.attr' named 'nm'
LossMetric(attr, nm = NULL)
LossMetric(attr, nm = NULL)
attr |
attr |
nm |
nm |
None
Launch a mock training to find a good learning rate, return lr_min, lr_steep if 'suggestions' is TRUE
lr_find( object, start_lr = 1e-07, end_lr = 10, num_it = 100, stop_div = TRUE, ... )
lr_find( object, start_lr = 1e-07, end_lr = 10, num_it = 100, stop_div = TRUE, ... )
object |
learner |
start_lr |
starting learning rate |
end_lr |
end learning rate |
num_it |
number of iterations |
stop_div |
stop div or not |
... |
additional arguments to pass |
data frame
## Not run: model %>% lr_find() model %>% plot_lr_find(dpi = 200) ## End(Not run)
## Not run: model %>% lr_find() model %>% plot_lr_find(dpi = 200) ## End(Not run)
Mean absolute error between 'inp' and 'targ'.
mae(inp, targ)
mae(inp, targ)
inp |
predictions |
targ |
targets |
None
Create a vocab of 'max_vocab' size from 'Counter' 'count' with items present more than 'min_freq'
make_vocab(count, min_freq = 3, max_vocab = 60000, special_toks = NULL)
make_vocab(count, min_freq = 3, max_vocab = 60000, special_toks = NULL)
count |
count |
min_freq |
min_freq |
max_vocab |
max_vocab |
special_toks |
special_toks |
None
Delegates ('__call__','decode','setup') to ('encodes','decodes','setups') if 'split_idx' matches
Mask_create(enc = NULL, dec = NULL, split_idx = NULL, order = NULL)
Mask_create(enc = NULL, dec = NULL, split_idx = NULL, order = NULL)
enc |
encoder |
dec |
decoder |
split_idx |
split by index |
order |
order |
None
Mask from blur
mask_from_blur(img, window, sigma = 0.3, thresh = 0.05, remove_max = TRUE)
mask_from_blur(img, window, sigma = 0.3, thresh = 0.05, remove_max = TRUE)
img |
image |
window |
windowing effect |
sigma |
sigma |
thresh |
threshold point |
remove_max |
remove maximum or not |
A 'DataLoader' with a custom 'collate_fn' that batches items as required for inferring the model.
mask_rcnn_infer_dl(dataset, batch_tfms = NULL, ...)
mask_rcnn_infer_dl(dataset, batch_tfms = NULL, ...)
dataset |
Possibly a 'Dataset' object, but more generally, any 'Sequence' that returns records. |
batch_tfms |
Transforms to be applied at the batch level. |
... |
dataloader_kwargs: Keyword arguments that will be internally passed to a Pytorch 'DataLoader'. The parameter 'collate_fn' is already defined internally and cannot be passed here. |
None
Fastai 'Learner' adapted for MaskRCNN.
mask_rcnn_learner(dls, model, cbs = NULL, ...)
mask_rcnn_learner(dls, model, cbs = NULL, ...)
dls |
'Sequence' of 'DataLoaders' passed to the 'Learner'. The first one will be used for training and the second for validation. |
model |
The model to train. |
cbs |
Optional 'Sequence' of callbacks. |
... |
learner_kwargs: Keyword arguments that will be internally passed to 'Learner'. |
model
MaskRCNN model implemented by torchvision.
mask_rcnn_model( num_classes, backbone = NULL, remove_internal_transforms = TRUE, pretrained = TRUE )
mask_rcnn_model( num_classes, backbone = NULL, remove_internal_transforms = TRUE, pretrained = TRUE )
num_classes |
Number of classes. |
backbone |
Backbone model to use. Defaults to a resnet50_fpn model. |
remove_internal_transforms |
The torchvision model internally applies transforms like resizing and normalization, but we already do this at the 'Dataset' level, so it's safe to remove those internal transforms. |
pretrained |
Argument passed to 'maskrcnn_resnet50_fpn' if 'backbone is NULL'. By default it is set to TRUE: this is generally used when training a new model (transfer learning). 'pretrained = FALSE' is used during inference (prediction) for cases where the users have their own pretrained weights. **mask_rcnn_kwargs: Keyword arguments that internally are going to be passed to 'torchvision.models.detection.mask_rcnn.MaskRCNN'. |
model
Mask RCNN predict dataloader
mask_rcnn_predict_dl(model, infer_dl, show_pbar = TRUE)
mask_rcnn_predict_dl(model, infer_dl, show_pbar = TRUE)
model |
model |
infer_dl |
infer_dl |
show_pbar |
show_pbar |
None
A 'DataLoader' with a custom 'collate_fn' that batches items as required for training the model.
mask_rcnn_train_dl(dataset, batch_tfms = NULL, ...)
mask_rcnn_train_dl(dataset, batch_tfms = NULL, ...)
dataset |
Possibly a 'Dataset' object, but more generally, any 'Sequence' that returns records. |
batch_tfms |
Transforms to be applied at the batch level. |
... |
dataloader_kwargs: Keyword arguments that will be internally passed to a Pytorch 'DataLoader'. The parameter 'collate_fn' is already defined internally and cannot be passed here. |
None
A 'DataLoader' with a custom 'collate_fn' that batches items as required for training the model.
mask_rcnn_valid_dl(dataset, batch_tfms = NULL, ...)
mask_rcnn_valid_dl(dataset, batch_tfms = NULL, ...)
dataset |
Possibly a 'Dataset' object, but more generally, any 'Sequence' that returns records. |
batch_tfms |
Transforms to be applied at the batch level. |
... |
dataloader_kwargs: Keyword arguments that will be internally passed to a Pytorch 'DataLoader'. The parameter 'collate_fn' is already defined internally and cannot be passed here. |
None
Mask elements of 'x' with 'neutral' with probability '1-p'
mask_tensor(x, p = 0.5, neutral = 0, batch = FALSE)
mask_tensor(x, p = 0.5, neutral = 0, batch = FALSE)
x |
tensor |
p |
probability |
neutral |
neutral |
batch |
batch |
None
Mask2bbox
mask2bbox(mask, convert = TRUE)
mask2bbox(mask, convert = TRUE)
mask |
mask |
convert |
to R matrix |
tensor
A 'TransformBlock' for segmentation masks, potentially with 'codes'
MaskBlock(codes = NULL)
MaskBlock(codes = NULL)
codes |
codes |
block
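A minimal sketch of a segmentation 'DataBlock', assuming 'codes' holds the mask class names and that 'DataBlock()', 'ImageBlock()' and 'get_image_files()' are available:
## Not run: seg = DataBlock(blocks = list(ImageBlock(), MaskBlock(codes)), get_items = get_image_files, splitter = RandomSplitter()) ## End(Not run)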
Pool 'MultiBatchEncoder' outputs into one vector [last_hidden, max_pool, avg_pool]
masked_concat_pool(output, mask, bptt)
masked_concat_pool(output, mask, bptt)
output |
output |
mask |
mask |
bptt |
bptt |
None
Google SpecAugment frequency masking from https://arxiv.org/abs/1904.08779.
MaskFreq(num_masks = 1, size = 20, start = NULL, val = NULL)
MaskFreq(num_masks = 1, size = 20, start = NULL, val = NULL)
num_masks |
number of masks |
size |
size |
start |
starting point |
val |
value |
None
Google SpecAugment time masking from https://arxiv.org/abs/1904.08779.
MaskTime(num_masks = 1, size = 20, start = NULL, val = NULL)
MaskTime(num_masks = 1, size = 20, start = NULL, val = NULL)
num_masks |
number of masks |
size |
size |
start |
starting point |
val |
value |
None
Convert the embedding in 'old_wgts' to go from 'old_vocab' to 'new_vocab'.
match_embeds(old_wgts, old_vocab, new_vocab)
match_embeds(old_wgts, old_vocab, new_vocab)
old_wgts |
old_wgts |
old_vocab |
old_vocab |
new_vocab |
new_vocab |
None
Matthews correlation coefficient for single-label classification problems
MatthewsCorrCoef(...)
MatthewsCorrCoef(...)
... |
parameters to pass |
None
Matthews correlation coefficient for multi-label classification problems
MatthewsCorrCoefMulti(thresh = 0.5, sigmoid = TRUE, sample_weight = NULL)
MatthewsCorrCoefMulti(thresh = 0.5, sigmoid = TRUE, sample_weight = NULL)
thresh |
thresh |
sigmoid |
sigmoid |
sample_weight |
sample_weight |
None
Max
## S3 method for class 'torch.Tensor' max(a, ..., na.rm = FALSE)
## S3 method for class 'torch.Tensor' max(a, ..., na.rm = FALSE)
a |
tensor |
... |
additional parameters |
na.rm |
remove NAs |
tensor
Max
## S3 method for class 'fastai.torch_core.TensorMask' max(a, ..., na.rm = FALSE)
## S3 method for class 'fastai.torch_core.TensorMask' max(a, ..., na.rm = FALSE)
a |
tensor |
... |
additional parameters |
na.rm |
remove NAs |
tensor
nn.MaxPool layer for 'ndim'
MaxPool(ks = 2, stride = NULL, padding = 0, ndim = 2, ceil_mode = FALSE)
MaxPool(ks = 2, stride = NULL, padding = 0, ndim = 2, ceil_mode = FALSE)
ks |
kernel size |
stride |
the stride of the window. Default value is kernel_size |
padding |
implicit zero padding to be added on both sides |
ndim |
dimension number |
ceil_mode |
when True, will use ceil instead of floor to compute the output shape |
None
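For instance, a 2d max-pooling layer with a 3x3 window and stride 2:
## Not run: mp = MaxPool(ks = 3, stride = 2, ndim = 2) ## End(Not run)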
Add empty dimension if it is a rank 1 tensor/array
maybe_unsqueeze(x)
maybe_unsqueeze(x)
x |
R array/matrix/tensor |
array
Turns on dropout during inference, allowing you to call Learner$get_preds multiple times to approximate your model uncertainty using Monte Carlo Dropout. https://arxiv.org/pdf/1506.02142.pdf
MCDropoutCallback(...)
MCDropoutCallback(...)
... |
arguments to pass |
None
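A minimal sketch of Monte Carlo Dropout, assuming 'learn' is a fitted learner; repeating 'get_preds' with the callback yields a distribution of predictions whose spread approximates model uncertainty.
## Not run: dist_preds = lapply(1:10, function(i) learn$get_preds(cbs = list(MCDropoutCallback()))) ## End(Not run)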
Mean of tensor
## S3 method for class 'fastai.torch_core.TensorMask' mean(x, ...)
## S3 method for class 'fastai.torch_core.TensorMask' mean(x, ...)
x |
tensor |
... |
additional parameters to pass |
tensor
Mean of tensor
## S3 method for class 'torch.Tensor' mean(x, ...)
## S3 method for class 'torch.Tensor' mean(x, ...)
x |
tensor |
... |
additional parameters to pass |
tensor
Merge a shortcut with the result of the module by adding them or concatenating them if 'dense=TRUE'.
MergeLayer(dense = FALSE)
MergeLayer(dense = FALSE)
dense |
dense |
None
Lightning module
migrating_lightning()
migrating_lightning()
None
Min
## S3 method for class 'torch.Tensor' min(a, ..., na.rm = FALSE)
## S3 method for class 'torch.Tensor' min(a, ..., na.rm = FALSE)
a |
tensor |
... |
additional parameters |
na.rm |
remove NAs |
tensor
Min
## S3 method for class 'fastai.torch_core.TensorMask' min(a, ..., na.rm = FALSE)
## S3 method for class 'fastai.torch_core.TensorMask' min(a, ..., na.rm = FALSE)
a |
tensor |
... |
additional parameters |
na.rm |
remove NAs |
tensor
Class Mish
Mish_(...)
Mish_(...)
... |
parameters to pass |
None
Records operation history and defines formulas for differentiating ops.
MishJitAutoFn(...)
MishJitAutoFn(...)
... |
parameters to pass |
None
A handler class for implementing 'MixUp' style scheduling
MixHandler(alpha = 0.5)
MixHandler(alpha = 0.5)
alpha |
alpha |
None
Implementation of https://arxiv.org/abs/1710.09412
MixUp(alpha = 0.4)
MixUp(alpha = 0.4)
alpha |
alpha |
None
Pass a dummy input through the model 'm' to get the various sizes of activations.
model_sizes(m, size = list(64, 64))
model_sizes(m, size = list(64, 64))
m |
m parameter |
size |
size |
None
Callback that resets the model at each validation/training step
ModelResetter(...)
ModelResetter(...)
... |
arguments to pass |
None
Step for SGD with momentum with 'lr'
momentum_step(p, lr, grad_avg, ...)
momentum_step(p, lr, grad_avg, ...)
p |
p |
lr |
learning rate |
grad_avg |
grad average |
... |
additional arguments to pass |
None
Sorted descending list of largest non-diagonal entries of confusion matrix, presented as actual, predicted, number of occurrences.
most_confused(interp, min_val = 1)
most_confused(interp, min_val = 1)
interp |
interpretation object |
min_val |
minimum value |
data frame
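For example, assuming 'learn' is a fitted classification learner:
## Not run: interp = ClassificationInterpretation_from_learner(learn) interp %>% most_confused(min_val = 2) ## End(Not run)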
Mean squared error between 'inp' and 'targ'.
mse(inp, targ)
mse(inp, targ)
inp |
predictions |
targ |
targets |
None
## Not run: model = dls %>% tabular_learner(layers=c(200,100,100,200), metrics = list(mse(),rmse()) ) ## End(Not run)
## Not run: model = dls %>% tabular_learner(layers=c(200,100,100,200), metrics = list(mse(),rmse()) ) ## End(Not run)
Flattens input and output, same as nn$MSELoss
MSELossFlat(...)
MSELossFlat(...)
... |
parameters to pass |
Loss object
Mean squared logarithmic error between 'inp' and 'targ'.
msle(inp, targ)
msle(inp, targ)
inp |
predictions |
targ |
targets |
None
Reversible transform of multi-category strings to 'vocab' id
MultiCategorize(vocab = NULL, add_na = FALSE)
MultiCategorize(vocab = NULL, add_na = FALSE)
vocab |
vocabulary |
add_na |
add NA |
None
'TransformBlock' for multi-label categorical targets
MultiCategoryBlock(encoded = FALSE, vocab = NULL, add_na = FALSE)
MultiCategoryBlock(encoded = FALSE, vocab = NULL, add_na = FALSE)
encoded |
encoded or not |
vocab |
vocabulary |
add_na |
add NA |
Block object
Multiply
## S3 method for class 'torch.Tensor' a * b
## S3 method for class 'torch.Tensor' a * b
a |
tensor |
b |
tensor |
tensor
Provides the ability to apply different loss functions to multi-modal targets/predictions
MultiTargetLoss(...)
MultiTargetLoss(...)
... |
additional arguments |
None
Narrow (slice) a tensor along a dimension
narrow(tensor, slice)
narrow(tensor, slice)
tensor |
torch tensor |
slice |
dimension |
tensor
Net model from Migrating_Pytorch
Net()
Net()
model
## Not run: Net() ## End(Not run)
## Not run: Net() ## End(Not run)
Fastai custom loss
nn_loss(loss_fn, name = "Custom_Loss")
nn_loss(loss_fn, name = "Custom_Loss")
loss_fn |
pass custom model function |
name |
set name for nn_module |
None
Fastai NN module
nn_module(model_fn, name = "Custom_Model", gpu = TRUE)
nn_module(model_fn, name = "Custom_Model", gpu = TRUE)
model_fn |
pass custom model function |
name |
set name for nn_module |
gpu |
move model to GPU |
None
A context manager to evaluate 'loss_func' with the reduction set to 'none'.
NoneReduce(loss_func)
NoneReduce(loss_func)
loss_func |
loss function |
None
Normalize 'x' with 'nrm', then apply 'f', then denormalize
norm_apply_denorm(x, f, nrm)
norm_apply_denorm(x, f, nrm)
x |
tensor |
f |
function |
nrm |
nrm |
None
Normalize the continuous variables.
Normalize(cat_names, cont_names)
Normalize(cat_names, cont_names)
cat_names |
cat_names |
cont_names |
cont_names |
None
Normalize from stats
Normalize_from_stats(mean, std, dim = 1, ndim = 4, cuda = TRUE)
Normalize_from_stats(mean, std, dim = 1, ndim = 4, cuda = TRUE)
mean |
mean |
std |
standard deviation |
dim |
dimension |
ndim |
number of dimensions |
cuda |
cuda or not |
list
Normalize the x variables.
NormalizeTS(enc = NULL, dec = NULL, split_idx = NULL, order = NULL)
NormalizeTS(enc = NULL, dec = NULL, split_idx = NULL, order = NULL)
enc |
encoder |
dec |
decoder |
split_idx |
split by index |
order |
order |
None
Logical_not
## S3 method for class 'fastai.torch_core.TensorMask' !x
## S3 method for class 'fastai.torch_core.TensorMask' !x
x |
tensor |
tensor
Not equal
## S3 method for class 'torch.Tensor' a != b
## S3 method for class 'torch.Tensor' a != b
a |
tensor |
b |
tensor |
tensor
Not equal
## S3 method for class 'fastai.torch_core.TensorMask' a != b
## S3 method for class 'fastai.torch_core.TensorMask' a != b
a |
tensor |
b |
tensor |
tensor
Return the number of output features for 'm'.
num_features_model(m)
num_features_model(m)
m |
m parameter |
None
Reversible transform of tokenized texts to numericalized ids
Numericalize( vocab = NULL, min_freq = 3, max_vocab = 60000, special_toks = NULL, pad_tok = NULL )
Numericalize( vocab = NULL, min_freq = 3, max_vocab = 60000, special_toks = NULL, pad_tok = NULL )
vocab |
vocab |
min_freq |
min_freq |
max_vocab |
max_vocab |
special_toks |
special_toks |
pad_tok |
pad_tok |
None
Randomly crop an image to 'size'
OldRandomCrop(size, pad_mode = "zeros", ...)
OldRandomCrop(size, pad_mode = "zeros", ...)
size |
size |
pad_mode |
padding mode |
... |
additional arguments |
None
One batch
one_batch(object, convert = FALSE, ...)
one_batch(object, convert = FALSE, ...)
object |
data loader |
convert |
to R matrix |
... |
additional parameters to pass |
tensor
## Not run: # get batch from data loader batch = dls %>% one_batch() ## End(Not run)
## Not run: # get batch from data loader batch = dls %>% one_batch() ## End(Not run)
Transform that creates AudioTensors from a list of files.
OpenAudio(items)
OpenAudio(items)
items |
vector, items |
None
Replace metric 'f' with a version that optimizes argument 'argname'
optim_metric(f, argname, bounds, tol = 0.01, do_neg = TRUE, get_x = FALSE)
optim_metric(f, argname, bounds, tol = 0.01, do_neg = TRUE, get_x = FALSE)
f |
f |
argname |
argname |
bounds |
bounds |
tol |
tol |
do_neg |
do_neg |
get_x |
get_x |
None
Optimizer
Optimizer(...)
Optimizer(...)
... |
parameters to pass |
None
OptimWrapper
OptimWrapper(...)
OptimWrapper(...)
... |
parameters to pass |
None
Logical_or
## S3 method for class 'fastai.torch_core.TensorMask' x | y
## S3 method for class 'fastai.torch_core.TensorMask' x | y
x |
tensor |
y |
tensor |
tensor
Check the OS environment for a TPU address (e.g. 'COLAB_TPU_ADDR')
os_environ_tpu(text = "COLAB_TPU_ADDR")
os_environ_tpu(text = "COLAB_TPU_ADDR")
text |
string to pass to environment |
None
Pad_conv_norm_relu
pad_conv_norm_relu( ch_in, ch_out, pad_mode, norm_layer, ks = 3, bias = TRUE, pad = 1, stride = 1, activ = TRUE, init = nn()$init$kaiming_normal_, init_gain = 0.02 )
pad_conv_norm_relu( ch_in, ch_out, pad_mode, norm_layer, ks = 3, bias = TRUE, pad = 1, stride = 1, activ = TRUE, init = nn()$init$kaiming_normal_, init_gain = 0.02 )
ch_in |
input |
ch_out |
output |
pad_mode |
padding mode |
norm_layer |
normalization layer |
ks |
kernel size |
bias |
bias |
pad |
padding |
stride |
stride |
activ |
activation |
init |
initializer |
init_gain |
init gain |
None
Function that collects 'samples' and adds padding
pad_input( samples, pad_idx = 1, pad_fields = 0, pad_first = FALSE, backwards = FALSE )
pad_input( samples, pad_idx = 1, pad_fields = 0, pad_first = FALSE, backwards = FALSE )
samples |
samples |
pad_idx |
pad_idx |
pad_fields |
pad_fields |
pad_first |
pad_first |
backwards |
backwards |
None
Pad 'samples' by adding padding by chunks of size 'seq_len'
pad_input_chunk(samples, pad_idx = 1, pad_first = TRUE, seq_len = 72)
pad_input_chunk(samples, pad_idx = 1, pad_first = TRUE, seq_len = 72)
samples |
samples |
pad_idx |
pad_idx |
pad_first |
pad_first |
seq_len |
seq_len |
None
Applies 'func' in parallel to 'items', using 'n_workers'
parallel(f, items, ...)
parallel(f, items, ...)
f |
function to apply to each item |
items |
items |
... |
additional arguments |
None
Calls optional 'setup' on 'tok' before launching 'TokenizeWithRules' using 'parallel_gen'
parallel_tokenize(items, tok = NULL, rules = NULL, n_workers = 6)
parallel_tokenize(items, tok = NULL, rules = NULL, n_workers = 6)
items |
items |
tok |
tokenizer |
rules |
rules |
n_workers |
n_workers |
None
Return all parameters of 'm'
params(m)
params(m)
m |
parameters |
None
Schedule hyper-parameters according to 'scheds'
ParamScheduler(scheds)
ParamScheduler(scheds)
scheds |
scheds |
None
Label 'item' with the parent folder name.
parent_label(o)
parent_label(o)
o |
string, dir path |
vector
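For example, for a file stored under a class-named folder (the path is illustrative):
## Not run: parent_label('mnist_sample/train/3/7.png') # returns '3' ## End(Not run)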
Adds areas method to parser
parsers_AreasMixin(...)
parsers_AreasMixin(...)
... |
arguments to pass |
None
Adds bboxes method to parser
parsers_BBoxesMixin(...)
parsers_BBoxesMixin(...)
... |
arguments to pass |
None
Parser with required mixins for Faster RCNN.
parsers_FasterRCNN(...)
parsers_FasterRCNN(...)
... |
arguments to pass |
None
Adds filepath method to parser
parsers_FilepathMixin(...)
parsers_FilepathMixin(...)
... |
arguments to pass |
None
Adds imageid method to parser
parsers_ImageidMixin(...)
parsers_ImageidMixin(...)
... |
arguments to pass |
None
Adds iscrowds method to parser
parsers_IsCrowdsMixin(...)
parsers_IsCrowdsMixin(...)
... |
arguments to pass |
None
Adds labels method to parser
parsers_LabelsMixin(...)
parsers_LabelsMixin(...)
... |
arguments to pass |
None
Parser with required mixins for Mask RCNN.
parsers_MaskRCNN(...)
parsers_MaskRCNN(...)
... |
arguments to pass |
None
Adds masks method to parser
parsers_MasksMixin(...)
parsers_MasksMixin(...)
... |
arguments to pass |
None
Adds image_width_height method to parser
parsers_SizeMixin(...)
parsers_SizeMixin(...)
... |
arguments to pass |
None
Voc parser
parsers_voc(annotations_dir, images_dir, class_map, masks_dir = NULL)
parsers_voc(annotations_dir, images_dir, class_map, masks_dir = NULL)
annotations_dir |
annotations_dir |
images_dir |
images_dir |
class_map |
class_map |
masks_dir |
masks_dir |
None
partial(func, *args, **keywords) - new function with partial application of the given arguments and keywords
partial(...)
partial(...)
... |
additional arguments |
None
## Not run: generator = basic_generator(out_size = 64, n_channels = 3, n_extra_layers = 1) critic = basic_critic(in_size = 64, n_channels = 3, n_extra_layers = 1, act_cls = partial(nn$LeakyReLU, negative_slope = 0.2)) ## End(Not run)
## Not run: generator = basic_generator(out_size = 64, n_channels = 3, n_extra_layers = 1) critic = basic_critic(in_size = 64, n_channels = 3, n_extra_layers = 1, act_cls = partial(nn$LeakyReLU, negative_slope = 0.2)) ## End(Not run)
Randomly select a partial quantity of data at each epoch
PartialDL( dataset = NULL, bs = NULL, partial_n = NULL, shuffle = FALSE, num_workers = NULL, verbose = FALSE, do_setup = TRUE, pin_memory = FALSE, timeout = 0, batch_size = NULL, drop_last = FALSE, indexed = NULL, n = NULL, device = NULL, persistent_workers = FALSE )
PartialDL( dataset = NULL, bs = NULL, partial_n = NULL, shuffle = FALSE, num_workers = NULL, verbose = FALSE, do_setup = TRUE, pin_memory = FALSE, timeout = 0, batch_size = NULL, drop_last = FALSE, indexed = NULL, n = NULL, device = NULL, persistent_workers = FALSE )
dataset |
dataset |
bs |
bs |
partial_n |
partial_n |
shuffle |
shuffle |
num_workers |
num_workers |
verbose |
verbose |
do_setup |
do_setup |
pin_memory |
pin_memory |
timeout |
timeout |
batch_size |
batch_size |
drop_last |
drop_last |
indexed |
indexed |
n |
n |
device |
device |
persistent_workers |
persistent_workers |
None
Layer that applies 'partial(func, ...)'
PartialLambda(func)
PartialLambda(func)
func |
function |
None
Compute PCA of 'x' with 'k' dimensions.
pca(object, k = 3, convert = TRUE)
pca(object, k = 3, convert = TRUE)
object |
an object to apply PCA |
k |
number of dimensions |
convert |
to R matrix |
tensor
Pearson correlation coefficient for regression problems
PearsonCorrCoef( dim_argmax = NULL, activation = "no", thresh = NULL, to_np = FALSE, invert_arg = FALSE, flatten = TRUE )
PearsonCorrCoef( dim_argmax = NULL, activation = "no", thresh = NULL, to_np = FALSE, invert_arg = FALSE, flatten = TRUE )
dim_argmax |
dim_argmax |
activation |
activation |
thresh |
thresh |
to_np |
to_np |
invert_arg |
invert_arg |
flatten |
flatten |
None
Perplexity
Perplexity(...)
Perplexity(...)
... |
parameters to pass |
None
A pipeline of composed (for encode/decode) transforms, setup with types
Pipeline(funcs = NULL, split_idx = NULL)
Pipeline(funcs = NULL, split_idx = NULL)
funcs |
functions |
split_idx |
split by index |
None
Upsample by 'scale' from 'ni' filters to 'nf' (default 'ni'), using 'nn.PixelShuffle'.
PixelShuffle_ICNR( ni, nf = NULL, scale = 2, blur = FALSE, norm_type = 3, act_cls = nn()$ReLU )
PixelShuffle_ICNR( ni, nf = NULL, scale = 2, blur = FALSE, norm_type = 3, act_cls = nn()$ReLU )
ni |
input shape |
nf |
number of features / outputs |
scale |
scale |
blur |
blur |
norm_type |
normalization type |
act_cls |
activation |
None
Plot dicom
plot(x, y, ..., dpi = 100)
plot(x, y, ..., dpi = 100)
x |
dicom object |
y |
y axis |
... |
parameters to pass |
dpi |
dots per inch |
None
Plot_bs_find
plot_bs_find(object, ..., dpi = 250)
plot_bs_find(object, ..., dpi = 250)
object |
model |
... |
additional arguments |
dpi |
dots per inch |
None
Plot the confusion matrix, with 'title' and using 'cmap'.
plot_confusion_matrix( interp, normalize = FALSE, title = "Confusion matrix", cmap = "Blues", norm_dec = 2, plot_txt = TRUE, figsize = c(4, 4), ..., dpi = 120 )
plot_confusion_matrix( interp, normalize = FALSE, title = "Confusion matrix", cmap = "Blues", norm_dec = 2, plot_txt = TRUE, figsize = c(4, 4), ..., dpi = 120 )
interp |
interpretation object |
normalize |
normalize |
title |
title |
cmap |
color map |
norm_dec |
norm dec |
plot_txt |
plot text |
figsize |
plot size |
... |
additional parameters to pass |
dpi |
dots per inch |
None
## Not run: interp = ClassificationInterpretation_from_learner(model) interp %>% plot_confusion_matrix(dpi = 90,figsize = c(6,6)) ## End(Not run)
## Not run: interp = ClassificationInterpretation_from_learner(model) interp %>% plot_confusion_matrix(dpi = 90,figsize = c(6,6)) ## End(Not run)
Plot the losses from 'skip_start' and onward
plot_loss(object, skip_start = 5, with_valid = TRUE, dpi = 200)
plot_loss(object, skip_start = 5, with_valid = TRUE, dpi = 200)
object |
model |
skip_start |
n points to skip the start |
with_valid |
with validation |
dpi |
dots per inch |
None
Plot the result of an LR Finder test (won't work if you didn't do 'lr_find(learn)' before)
plot_lr_find(object, skip_end = 5, dpi = 250)
plot_lr_find(object, skip_end = 5, dpi = 250)
object |
model |
skip_end |
n points to skip the end |
dpi |
dots per inch |
None
Plot_top_losses
plot_top_losses(interp, k, largest = TRUE, figsize = c(7, 5), ..., dpi = 90)
plot_top_losses(interp, k, largest = TRUE, figsize = c(7, 5), ..., dpi = 90)
interp |
interpretation object |
k |
number of images |
largest |
largest |
figsize |
plot size |
... |
additional parameters to pass |
dpi |
dots per inch |
None
## Not run: # get interpretation from learn object, the model. interp = ClassificationInterpretation_from_learner(learn) interp %>% plot_top_losses(k = 9, figsize = c(15,11)) ## End(Not run)
## Not run: # get interpretation from learn object, the model. interp = ClassificationInterpretation_from_learner(learn) interp %>% plot_top_losses(k = 9, figsize = c(15,11)) ## End(Not run)
A 'TransformBlock' for points in an image
PointBlock()
PointBlock()
None
Scale a tensor representing points
PointScaler(do_scale = TRUE, y_first = FALSE)
PointScaler(do_scale = TRUE, y_first = FALSE)
do_scale |
do scale |
y_first |
y first |
None
Pooled self attention layer for 2d.
PooledSelfAttention2d(n_channels)
PooledSelfAttention2d(n_channels)
n_channels |
number of channels |
None
Combine 'nn.AdaptiveAvgPool2d' and 'Flatten'.
PoolFlatten(pool_type = "Avg")
PoolFlatten(pool_type = "Avg")
pool_type |
pooling type |
None
Create a linear classifier with pooling
PoolingLinearClassifier(dims, ps, bptt, y_range = NULL)
PoolingLinearClassifier(dims, ps, bptt, y_range = NULL)
dims |
dims |
ps |
ps |
bptt |
bptt |
y_range |
y_range |
None
Pow
## S3 method for class 'torch.Tensor' a ^ b
## S3 method for class 'torch.Tensor' a ^ b
a |
tensor |
b |
tensor |
tensor
Pre_process_squad
pre_process_squad(row, hf_arch, hf_tokenizer)
pre_process_squad(row, hf_arch, hf_tokenizer)
row |
row in dataframe |
hf_arch |
architecture |
hf_tokenizer |
tokenizer |
None
Precision for single-label classification problems
Precision( axis = -1, labels = NULL, pos_label = 1, average = "binary", sample_weight = NULL )
Precision( axis = -1, labels = NULL, pos_label = 1, average = "binary", sample_weight = NULL )
axis |
axis |
labels |
labels |
pos_label |
pos_label |
average |
average |
sample_weight |
sample_weight |
None
Precision for multi-label classification problems
PrecisionMulti( thresh = 0.5, sigmoid = TRUE, labels = NULL, pos_label = 1, average = "macro", sample_weight = NULL )
PrecisionMulti( thresh = 0.5, sigmoid = TRUE, labels = NULL, pos_label = 1, average = "macro", sample_weight = NULL )
thresh |
thresh |
sigmoid |
sigmoid |
labels |
labels |
pos_label |
pos_label |
average |
average |
sample_weight |
sample_weight |
None
Prediction on 'item', fully decoded, loss function decoded and probabilities
## S3 method for class 'fastai.learner.Learner' predict(object, row, ...)
## S3 method for class 'fastai.learner.Learner' predict(object, row, ...)
object |
the model |
row |
row |
... |
additional arguments to pass |
data frame
Prediction on 'item', fully decoded, loss function decoded and probabilities
## S3 method for class 'fastai.tabular.learner.TabularLearner' predict(object, row, ...)
## S3 method for class 'fastai.tabular.learner.TabularLearner' predict(object, row, ...)
object |
the model |
row |
row |
... |
additional arguments to pass |
data frame
Perplexity (exponential of cross-entropy loss) for Language Models
preplexity(...)
preplexity(...)
... |
parameters to pass |
None
Preprocess audio files in 'path' in parallel using 'n_workers'
preprocess_audio_folder( path, folders = NULL, output_dir = NULL, sample_rate = 16000, force_mono = TRUE, crop_signal_to = NULL )
preprocess_audio_folder( path, folders = NULL, output_dir = NULL, sample_rate = 16000, force_mono = TRUE, crop_signal_to = NULL )
path |
directory, path |
folders |
folders |
output_dir |
output directory |
sample_rate |
sample rate |
force_mono |
force mono or not |
crop_signal_to |
int, crop signal |
None
Creates an audio tensor and run the basic preprocessing transforms on it.
PreprocessAudio(sample_rate = 16000, force_mono = TRUE, crop_signal_to = NULL)
PreprocessAudio(sample_rate = 16000, force_mono = TRUE, crop_signal_to = NULL)
sample_rate |
sample rate |
force_mono |
force mono or not |
crop_signal_to |
int, crop signal |
Used while preprocessing the audio files; this is not a 'Transform'.
None
Print model
## S3 method for class 'fastai.learner.Learner' print(x, ...)
## S3 method for class 'fastai.learner.Learner' print(x, ...)
x |
object |
... |
additional parameters to pass |
None
Print tabular model
## S3 method for class 'fastai.tabular.learner.TabularLearner' print(x, ...)
## S3 method for class 'fastai.tabular.learner.TabularLearner' print(x, ...)
x |
model |
... |
additional parameters to pass |
None
Prints a dicom file
## S3 method for class 'pydicom.dataset.FileDataset' print(x, ...)
## S3 method for class 'pydicom.dataset.FileDataset' print(x, ...)
x |
dicom file |
... |
additional parameters to pass |
None
Pandas apply
py_apply(df, ...)
py_apply(df, ...)
df |
dataframe |
... |
additional arguments |
dataframe
Qhadam_step
qhadam_step(p, lr, mom, sqr_mom, sqr_avg, nu_1, nu_2, step, grad_avg, eps, ...)
qhadam_step(p, lr, mom, sqr_mom, sqr_avg, nu_1, nu_2, step, grad_avg, eps, ...)
p |
p |
lr |
learning rate |
mom |
momentum |
sqr_mom |
sqr momentum |
sqr_avg |
sqr average |
nu_1 |
nu_1 |
nu_2 |
nu_2 |
step |
step |
grad_avg |
gradient average |
eps |
epsilon |
... |
additional arguments to pass |
None
Apply a multiple layer Quasi-Recurrent Neural Network (QRNN) to an input sequence.
QRNN( input_size, hidden_size, n_layers = 1, batch_first = TRUE, dropout = 0, bidirectional = FALSE, save_prev_x = FALSE, zoneout = 0, window = NULL, output_gate = TRUE )
QRNN( input_size, hidden_size, n_layers = 1, batch_first = TRUE, dropout = 0, bidirectional = FALSE, save_prev_x = FALSE, zoneout = 0, window = NULL, output_gate = TRUE )
input_size |
input_size |
hidden_size |
hidden size |
n_layers |
n_layers |
batch_first |
batch_first |
dropout |
dropout |
bidirectional |
bidirectional |
save_prev_x |
save_prev_x |
zoneout |
zoneout |
window |
window |
output_gate |
output_gate |
None
Apply a single layer Quasi-Recurrent Neural Network (QRNN) to an input sequence.
QRNNLayer( input_size, hidden_size = NULL, save_prev_x = FALSE, zoneout = 0, window = 1, output_gate = TRUE, batch_first = TRUE, backward = FALSE )
QRNNLayer( input_size, hidden_size = NULL, save_prev_x = FALSE, zoneout = 0, window = 1, output_gate = TRUE, batch_first = TRUE, backward = FALSE )
input_size |
input_size |
hidden_size |
hidden size |
save_prev_x |
save_prev_x |
zoneout |
zoneout |
window |
window |
output_gate |
output_gate |
batch_first |
batch_first |
backward |
backward |
None
R2 score between predictions and targets
R2Score(sample_weight = NULL)
R2Score(sample_weight = NULL)
sample_weight |
sample_weight |
None
Step for RAdam with 'lr' on 'p'
radam_step(p, lr, mom, step, sqr_mom, grad_avg, sqr_avg, eps, beta, ...)
radam_step(p, lr, mom, step, sqr_mom, grad_avg, sqr_avg, eps, beta, ...)
p |
p |
lr |
learning rate |
mom |
momentum |
step |
step |
sqr_mom |
sqr momentum |
grad_avg |
grad average |
sqr_avg |
sqr average |
eps |
epsilon |
beta |
beta |
... |
additional arguments to pass |
None
Randomly crop an image to 'size'
RandomCrop(size, ...)
RandomCrop(size, ...)
size |
size |
... |
additional arguments |
None
Randomly selects a rectangle region in an image and randomizes its pixels.
RandomErasing(p = 0.5, sl = 0, sh = 0.3, min_aspect = 0.3, max_count = 1)
RandomErasing(p = 0.5, sl = 0, sh = 0.3, min_aspect = 0.3, max_count = 1)
p |
probability |
sl |
sl |
sh |
sh |
min_aspect |
minimum aspect |
max_count |
maximum count |
None
Picks a random scaled crop of an image and resizes it to 'size'
RandomResizedCrop( size, min_scale = 0.08, ratio = list(0.75, 1.33333333333333), resamples = list(2, 0), val_xtra = 0.14 )
RandomResizedCrop( size, min_scale = 0.08, ratio = list(0.75, 1.33333333333333), resamples = list(2, 0), val_xtra = 0.14 )
size |
size |
min_scale |
minimum scale |
ratio |
ratio |
resamples |
resamples |
val_xtra |
validation xtra |
None
Picks a random scaled crop of an image and resizes it to 'size'
RandomResizedCropGPU( size, min_scale = 0.08, ratio = list(0.75, 1.33333333333333), mode = "bilinear", valid_scale = 1 )
RandomResizedCropGPU( size, min_scale = 0.08, ratio = list(0.75, 1.33333333333333), mode = "bilinear", valid_scale = 1 )
size |
size |
min_scale |
minimum scale |
ratio |
ratio |
mode |
mode |
valid_scale |
validation scale |
None
Create function that splits 'items' between train/val with 'valid_pct' randomly.
RandomSplitter(valid_pct = 0.2, seed = NULL)
RandomSplitter(valid_pct = 0.2, seed = NULL)
valid_pct |
validation percentage split |
seed |
random seed |
None
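For example, a reproducible 80/20 split:
## Not run: splitter = RandomSplitter(valid_pct = 0.2, seed = 42) ## End(Not run)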
Returns a random image from domain B, resulting in a random pair of images from domains A and B.
RandPair(itemsB)
RandPair(itemsB)
itemsB |
a random image from domain B |
None
A transform that calls 'before_call' to randomize its state at each '__call__'
RandTransform(p = 1, nm = NULL, before_call = NULL, ...)
RandTransform(p = 1, nm = NULL, before_call = NULL, ...)
p |
probability |
nm |
nm |
before_call |
before call |
... |
additional arguments to pass |
None
Convenience method for 'Lookahead' with 'RAdam'
ranger( p, lr, mom = 0.95, wd = 0.01, eps = 1e-06, sqr_mom = 0.99, beta = 0, decouple_wd = TRUE )
ranger( p, lr, mom = 0.95, wd = 0.01, eps = 1e-06, sqr_mom = 0.99, beta = 0, decouple_wd = TRUE )
p |
p |
lr |
learning rate |
mom |
momentum |
wd |
weight decay |
eps |
epsilon |
sqr_mom |
sqr momentum |
beta |
beta |
decouple_wd |
decouple weight decay |
None
Resizes the biggest dimension of an image to 'max_sz', maintaining the aspect ratio
RatioResize(max_sz, resamples = list(2, 0), ...)
RatioResize(max_sz, resamples = list(2, 0), ...)
max_sz |
maximum sz |
resamples |
resamples |
... |
additional arguments |
None
A transform that always takes lists as items
ReadTSBatch(to)
ReadTSBatch(to)
to |
output from TSDataTable function |
None
Recall for single-label classification problems
Recall( axis = -1, labels = NULL, pos_label = 1, average = "binary", sample_weight = NULL )
Recall( axis = -1, labels = NULL, pos_label = 1, average = "binary", sample_weight = NULL )
axis |
axis |
labels |
labels |
pos_label |
pos_label |
average |
average |
sample_weight |
sample_weight |
None
Recall for multi-label classification problems
RecallMulti( thresh = 0.5, sigmoid = TRUE, labels = NULL, pos_label = 1, average = "macro", sample_weight = NULL )
RecallMulti( thresh = 0.5, sigmoid = TRUE, labels = NULL, pos_label = 1, average = "macro", sample_weight = NULL )
thresh |
thresh |
sigmoid |
sigmoid |
labels |
labels |
pos_label |
pos_label |
average |
average |
sample_weight |
sample_weight |
None
ReduceLROnPlateau
ReduceLROnPlateau(...)
ReduceLROnPlateau(...)
... |
parameters to pass |
None
## Not run: URLs_MNIST_SAMPLE() # transformations tfms = aug_transforms(do_flip = FALSE) path = 'mnist_sample' bs = 20 #load into memory data = ImageDataLoaders_from_folder(path, batch_tfms = tfms, size = 26, bs = bs) learn = cnn_learner(data, resnet18(), metrics = accuracy, path = getwd()) learn %>% fit_one_cycle(10, 1e-2, cbs = ReduceLROnPlateau(monitor='valid_loss', patience = 1)) ## End(Not run)
## Not run: URLs_MNIST_SAMPLE() # transformations tfms = aug_transforms(do_flip = FALSE) path = 'mnist_sample' bs = 20 #load into memory data = ImageDataLoaders_from_folder(path, batch_tfms = tfms, size = 26, bs = bs) learn = cnn_learner(data, resnet18(), metrics = accuracy, path = getwd()) learn %>% fit_one_cycle(10, 1e-2, cbs = ReduceLROnPlateau(monitor='valid_loss', patience = 1)) ## End(Not run)
'TransformBlock' for float targets
RegressionBlock(n_out = NULL)
RegressionBlock(n_out = NULL)
n_out |
number of out features |
Block object
Split signal at points of silence greater than 2*pad_ms
RemoveSilence( remove_type = RemoveType()$Trim$value, threshold = 20, pad_ms = 20 )
RemoveSilence( remove_type = RemoveType()$Trim$value, threshold = 20, pad_ms = 20 )
remove_type |
remove type from RemoveType module |
threshold |
threshold point |
pad_ms |
pad milliseconds |
None
Replace tokens in ALL CAPS by their lower version and add 'TK_UP' before.
replace_all_caps(t)
replace_all_caps(t)
t |
text |
string
Replace tokens in Sentence Case by their lower version and add 'TK_MAJ' before.
replace_maj(t)
replace_maj(t)
t |
text |
string
Replace repetitions at the character level: cccc – TK_REP 4 c
replace_rep(t)
replace_rep(t)
t |
text |
string
Replace word repetitions: word word word word – TK_WREP 4 word
replace_wrep(t)
replace_wrep(t)
t |
text |
string
Resnet block as described in the paper.
res_block_1d(nf, ks = c(5, 3))
res_block_1d(nf, ks = c(5, 3))
nf |
number of features |
ks |
kernel size |
block
Resample using faster polyphase technique and avoiding FFT computation
Resample(sr_new)
Resample(sr_new)
sr_new |
new sample rate |
None
Resnet block from 'ni' to 'nh' with 'stride'
ResBlock( expansion, ni, nf, stride = 1, groups = 1, reduction = NULL, nh1 = NULL, nh2 = NULL, dw = FALSE, g2 = 1, sa = FALSE, sym = FALSE, norm_type = 1, act_cls = nn$ReLU, ndim = 2, ks = 3, pool = AvgPool(), pool_first = TRUE, padding = NULL, bias = NULL, bn_1st = TRUE, transpose = FALSE, init = "auto", xtra = NULL, bias_std = 0.01, dilation = 1, padding_mode = "zeros" )
ResBlock( expansion, ni, nf, stride = 1, groups = 1, reduction = NULL, nh1 = NULL, nh2 = NULL, dw = FALSE, g2 = 1, sa = FALSE, sym = FALSE, norm_type = 1, act_cls = nn$ReLU, ndim = 2, ks = 3, pool = AvgPool(), pool_first = TRUE, padding = NULL, bias = NULL, bn_1st = TRUE, transpose = FALSE, init = "auto", xtra = NULL, bias_std = 0.01, dilation = 1, padding_mode = "zeros" )
expansion |
decoder |
ni |
number of linear inputs |
nf |
number of features |
stride |
stride number |
groups |
groups number |
reduction |
reduction |
nh1 |
out channels 1 |
nh2 |
out channels 2 |
dw |
dw parameter |
g2 |
g2 block |
sa |
sa parameter |
sym |
symmetric |
norm_type |
normalization type |
act_cls |
activation |
ndim |
dimension number |
ks |
kernel size |
pool |
pooling type, Average, Max |
pool_first |
pooling first |
padding |
padding |
bias |
bias |
bn_1st |
batch normalization 1st |
transpose |
transpose |
init |
initializer |
xtra |
xtra |
bias_std |
bias standard deviation |
dilation |
dilation number |
padding_mode |
padding mode |
Block object
Resize 'x' to '(w, h)'
reshape(x, h, w, resample = 0)
reshape(x, h, w, resample = 0)
x |
tensor |
h |
height |
w |
width |
resample |
resample value |
None
A transform that calls 'before_call' to randomize its state at each '__call__'
Resize(size, method = "crop", pad_mode = "reflection", resamples = list(2, 0))
Resize(size, method = "crop", pad_mode = "reflection", resamples = list(2, 0))
size |
size of image |
method |
method |
pad_mode |
reflection, zeros, border as string parameter |
resamples |
list of integers |
None
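Typically used as an item transform; a minimal sketch assuming 'path' and 'fnames' as in the pets example earlier:
## Not run: dls = ImageDataLoaders_from_name_re(path, fnames, pat = '(.+)_\\d+.jpg$', item_tfms = Resize(224, method = 'squish'), bs = 32) ## End(Not run)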
'resize' 'x' to 'max_px', or 'max_h', or 'max_w'
resize_max(img, resample = 0, max_px = NULL, max_h = NULL, max_w = NULL)
resize_max(img, resample = 0, max_px = NULL, max_h = NULL, max_w = NULL)
img |
image |
resample |
resample value |
max_px |
max px |
max_h |
max height |
max_w |
max width |
None
Reshape x to size, keeping batch dim the same size
ResizeBatch(...)
ResizeBatch(...)
... |
parameters to pass |
None
Crops signal to be length specified in ms by duration, padding if needed
ResizeSignal(duration, pad_mode = AudioPadType()$Zeros)
ResizeSignal(duration, pad_mode = AudioPadType()$Zeros)
duration |
int, duration |
pad_mode |
padding mode |
None
Base class for all neural network modules.
ResNet( block, layers, num_classes = 1000, zero_init_residual = FALSE, groups = 1, width_per_group = 64, replace_stride_with_dilation = NULL, norm_layer = NULL )
ResNet( block, layers, num_classes = 1000, zero_init_residual = FALSE, groups = 1, width_per_group = 64, replace_stride_with_dilation = NULL, norm_layer = NULL )
block |
the blocks that need to passed to ResNet |
layers |
the layers to pass to ResNet |
num_classes |
the number of classes |
zero_init_residual |
logical, initializer |
groups |
the groups |
width_per_group |
the width per group |
replace_stride_with_dilation |
logical, replace stride with dilation |
norm_layer |
norm_layer |
Resnet_generator
resnet_generator( ch_in, ch_out, n_ftrs = 64, norm_layer = NULL, dropout = 0, n_blocks = 9, pad_mode = "reflection" )
resnet_generator( ch_in, ch_out, n_ftrs = 64, norm_layer = NULL, dropout = 0, n_blocks = 9, pad_mode = "reflection" )
ch_in |
input |
ch_out |
output |
n_ftrs |
filter |
norm_layer |
normalization layer |
dropout |
dropout rate |
n_blocks |
number of blocks |
pad_mode |
padding mode |
None
ResNet-101 model
resnet101(pretrained = FALSE, progress)
resnet101(pretrained = FALSE, progress)
pretrained |
pretrained or not |
progress |
to see progress bar or not |
"Deep Residual Learning for Image Recognition" <https://arxiv.org/pdf/1512.03385.pdf>
model
Resnet152
resnet152(pretrained = FALSE, progress)
resnet152(pretrained = FALSE, progress)
pretrained |
pretrained or not |
progress |
to see progress bar or not |
"Deep Residual Learning for Image Recognition" <https://arxiv.org/pdf/1512.03385.pdf>
model
Resnet18
resnet18(pretrained = FALSE, progress)
resnet18(pretrained = FALSE, progress)
pretrained |
pretrained or not |
progress |
to see progress bar or not |
"Deep Residual Learning for Image Recognition" <https://arxiv.org/pdf/1512.03385.pdf>
model
ResNet-34 model from
resnet34(pretrained = FALSE, progress)
resnet34(pretrained = FALSE, progress)
pretrained |
pretrained or not |
progress |
to see progress bar or not |
"Deep Residual Learning for Image Recognition" <https://arxiv.org/pdf/1512.03385.pdf>
model
ResNet-50 model from
resnet50(pretrained = FALSE, progress)
resnet50(pretrained = FALSE, progress)
pretrained |
pretrained or not |
progress |
to see progress bar or not |
"Deep Residual Learning for Image Recognition" <https://arxiv.org/pdf/1512.03385.pdf>
model
nn()$Module for the ResNet Block
ResnetBlock( dim, pad_mode = "reflection", norm_layer = NULL, dropout = 0, bias = TRUE )
ResnetBlock( dim, pad_mode = "reflection", norm_layer = NULL, dropout = 0, bias = TRUE )
dim |
dimension |
pad_mode |
padding mode |
norm_layer |
normalization layer |
dropout |
dropout rate |
bias |
bias or not |
None
Implements RetinaNet from https://arxiv.org/abs/1708.02002
RetinaNet(...)
RetinaNet(...)
... |
arguments to pass |
model
## Not run: 
encoder = create_body(resnet34(), pretrained = TRUE)
arch = RetinaNet(encoder, get_c(dls), final_bias = -4)
## End(Not run)
Base class for all neural network modules.
RetinaNetFocalLoss(...)
RetinaNetFocalLoss(...)
... |
parameters to pass |
Your models should also subclass this class. Modules can contain other Modules, allowing them to be nested in a tree structure; submodules assigned as regular attributes (in the Python sources, e.g. 'self.conv1 = nn.Conv2d(1, 20, 5)' inside '__init__') are registered, and their parameters are converted too when you call methods such as 'to()'.
None
Reverse_text
reverse_text(x)
reverse_text(x)
x |
text |
string
Converts a RGB image to an HSV image.
rgb2hsv(img)
rgb2hsv(img)
img |
image object |
Note: Will not work on logit space images.
None
Remove multiple spaces
rm_useless_spaces(t)
rm_useless_spaces(t)
t |
text |
string
## Not run: 
rm_useless_spaces('hello,  Sir!')
## End(Not run)
Step for RMSProp with 'lr'
rms_prop_step(p, lr, sqr_avg, eps, grad_avg = NULL, ...)
rms_prop_step(p, lr, sqr_avg, eps, grad_avg = NULL, ...)
p |
parameter tensor |
lr |
learning rate |
sqr_avg |
sqr average |
eps |
epsilon |
grad_avg |
grad average |
... |
additional arguments to pass |
None
Root mean squared error
rmse(preds, targs)
rmse(preds, targs)
preds |
predictions |
targs |
targets |
None
## Not run: 
model = dls %>% tabular_learner(layers = c(200, 100, 100, 200),
                                metrics = list(mse(), rmse()))
## End(Not run)
RMSProp
RMSProp(...)
RMSProp(...)
... |
parameters to pass |
None
Dropout with probability 'p' that is consistent on the seq_len dimension.
RNNDropout(p = 0.5)
RNNDropout(p = 0.5)
p |
dropout probability |
None
'Callback' that adds AR and TAR regularization in RNN training
RNNRegularizer(alpha = 0, beta = 0)
RNNRegularizer(alpha = 0, beta = 0)
alpha |
scaling factor for AR (activation regularization) |
beta |
scaling factor for TAR (temporal activation regularization) |
None
Area Under the Receiver Operating Characteristic Curve for single-label multiclass classification problems
RocAuc( axis = -1, average = "macro", sample_weight = NULL, max_fpr = NULL, multi_class = "ovr" )
RocAuc( axis = -1, average = "macro", sample_weight = NULL, max_fpr = NULL, multi_class = "ovr" )
axis |
axis |
average |
average |
sample_weight |
sample_weight |
max_fpr |
max_fpr |
multi_class |
multi_class |
None
Area Under the Receiver Operating Characteristic Curve for single-label binary classification problems
RocAucBinary( axis = -1, average = "macro", sample_weight = NULL, max_fpr = NULL, multi_class = "raise" )
RocAucBinary( axis = -1, average = "macro", sample_weight = NULL, max_fpr = NULL, multi_class = "raise" )
axis |
axis |
average |
average |
sample_weight |
sample_weight |
max_fpr |
max_fpr |
multi_class |
multi_class |
None
## Not run: 
model = dls %>% tabular_learner(
  layers = c(200, 100, 100, 200),
  config = tabular_config(embed_p = 0.3, use_bn = FALSE),
  metrics = list(accuracy, RocAucBinary(), Precision(), Recall(), F1Score())
)
## End(Not run)
Area Under the Receiver Operating Characteristic Curve for multi-label binary classification problems
RocAucMulti( sigmoid = TRUE, average = "macro", sample_weight = NULL, max_fpr = NULL )
RocAucMulti( sigmoid = TRUE, average = "macro", sample_weight = NULL, max_fpr = NULL )
sigmoid |
sigmoid |
average |
average |
sample_weight |
sample_weight |
max_fpr |
max_fpr |
None
Apply a random rotation of at most 'max_deg' with probability 'p' to a batch of images
Rotate( max_deg = 10, p = 0.5, draw = NULL, size = NULL, mode = "bilinear", pad_mode = "reflection", align_corners = TRUE, batch = FALSE )
Rotate( max_deg = 10, p = 0.5, draw = NULL, size = NULL, mode = "bilinear", pad_mode = "reflection", align_corners = TRUE, batch = FALSE )
max_deg |
maximum degrees |
p |
probability |
draw |
draw |
size |
size of image |
mode |
mode |
pad_mode |
reflection, zeros, border as string parameter |
align_corners |
align corners or not |
batch |
batch or not |
None
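A minimal sketch of 'Rotate' combined with another batch transform; passing the list as 'batch_tfms' to a dataloader builder is assumed to happen elsewhere:
## Not run: 
tfms = list(Rotate(max_deg = 20, p = 0.5), Saturation(max_lighting = 0.3, p = 0.75))
## End(Not run)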
Return a random rotation matrix with 'max_deg' and 'p'
rotate_mat(x, max_deg = 10, p = 0.5, draw = NULL, batch = FALSE)
rotate_mat(x, max_deg = 10, p = 0.5, draw = NULL, batch = FALSE)
x |
tensor |
max_deg |
max_deg |
p |
probability |
draw |
draw |
batch |
batch |
None
Round
## S3 method for class 'torch.Tensor' round(x, digits = 0)
## S3 method for class 'torch.Tensor' round(x, digits = 0)
x |
tensor |
digits |
decimal |
tensor
Round
## S3 method for class 'fastai.torch_core.TensorMask' round(x, digits = 0)
## S3 method for class 'fastai.torch_core.TensorMask' round(x, digits = 0)
x |
tensor |
digits |
decimal |
tensor
Apply change in saturation of 'max_lighting' to batch of images with probability 'p'.
Saturation(max_lighting = 0.2, p = 0.75, draw = NULL, batch = FALSE)
Saturation(max_lighting = 0.2, p = 0.75, draw = NULL, batch = FALSE)
max_lighting |
maximum lighting |
p |
probability |
draw |
draw |
batch |
batch |
None
SaveModelCallback
SaveModelCallback(...)
SaveModelCallback(...)
... |
parameters to pass |
None
Cosine schedule function from 'start' to 'end'
SchedCos(start, end)
SchedCos(start, end)
start |
start |
end |
end |
None
Exponential schedule function from 'start' to 'end'
SchedExp(start, end)
SchedExp(start, end)
start |
start |
end |
end |
None
Linear schedule function from 'start' to 'end'
SchedLin(start, end)
SchedLin(start, end)
start |
start |
end |
end |
None
Constant schedule function with 'start' value
SchedNo(start, end)
SchedNo(start, end)
start |
start |
end |
end |
None
Polynomial schedule (of 'power') function from 'start' to 'end'
SchedPoly(start, end, power)
SchedPoly(start, end, power)
start |
start |
end |
end |
power |
power |
None
SEBlock
SEBlock(expansion, ni, nf, groups = 1, reduction = 16, stride = 1)
SEBlock(expansion, ni, nf, groups = 1, reduction = 16, stride = 1)
expansion |
expansion factor |
ni |
number of inputs |
nf |
number of features |
groups |
number of groups |
reduction |
reduction ratio |
stride |
number of strides |
Block object
Create from list of 'fnames' in 'path's with 'label_func'.
SegmentationDataLoaders_from_label_func( path, fnames, label_func, valid_pct = 0.2, seed = NULL, codes = NULL, item_tfms = NULL, batch_tfms = NULL, bs = 64, val_bs = NULL, shuffle_train = TRUE, device = NULL )
SegmentationDataLoaders_from_label_func( path, fnames, label_func, valid_pct = 0.2, seed = NULL, codes = NULL, item_tfms = NULL, batch_tfms = NULL, bs = 64, val_bs = NULL, shuffle_train = TRUE, device = NULL )
path |
path |
fnames |
file names |
label_func |
label function |
valid_pct |
validation percentage |
seed |
seed |
codes |
codes |
item_tfms |
item transformations |
batch_tfms |
batch transformations |
bs |
batch size |
val_bs |
validation batch size |
shuffle_train |
shuffle train |
device |
device name |
None
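A minimal sketch, assuming 'fnames', 'label_fn', and 'codes' have already been prepared for the dataset at 'path':
## Not run: 
dls = SegmentationDataLoaders_from_label_func(
  path, fnames = fnames, label_func = label_fn,
  codes = codes, bs = 8, item_tfms = Resize(size = 256)
)
## End(Not run)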
Self attention layer for 'n_channels'.
SelfAttention(n_channels)
SelfAttention(n_channels)
n_channels |
number of channels |
None
SEModule
SEModule(ch, reduction, act_cls = nn()$ReLU)
SEModule(ch, reduction, act_cls = nn()$ReLU)
ch |
ch |
reduction |
reduction |
act_cls |
activation |
None
Create an encoder over 'module' that can process a full sentence.
SentenceEncoder(bptt, module, pad_idx = 1, max_len = NULL)
SentenceEncoder(bptt, module, pad_idx = 1, max_len = NULL)
bptt |
backpropagation-through-time sequence length |
module |
module |
pad_idx |
pad_idx |
max_len |
max_len |
None
SentencePiece tokenizer for 'lang'
SentencePieceTokenizer( lang = "en", special_toks = NULL, sp_model = NULL, vocab_sz = NULL, max_vocab_sz = 30000, model_type = "unigram", char_coverage = NULL, cache_dir = "tmp" )
SentencePieceTokenizer( lang = "en", special_toks = NULL, sp_model = NULL, vocab_sz = NULL, max_vocab_sz = 30000, model_type = "unigram", char_coverage = NULL, cache_dir = "tmp" )
lang |
lang |
special_toks |
special_toks |
sp_model |
sp_model |
vocab_sz |
vocab_sz |
max_vocab_sz |
max_vocab_sz |
model_type |
model_type |
char_coverage |
char_coverage |
cache_dir |
cache_dir |
None
SeparableBlock
SeparableBlock(expansion, ni, nf, reduction = 16, stride = 1, base_width = 4)
SeparableBlock(expansion, ni, nf, reduction = 16, stride = 1, base_width = 4)
expansion |
expansion factor |
ni |
number of inputs |
nf |
number of features |
reduction |
reduction ratio |
stride |
number of stride |
base_width |
base width |
Block object
Sequential
sequential(...)
sequential(...)
... |
parameters to pass |
None
SequentialEx
SequentialEx(...)
SequentialEx(...)
... |
parameters to pass |
None
Sequential RNN
SequentialRNN(...)
SequentialRNN(...)
... |
parameters to pass |
layer
SEResNeXtBlock
SEResNeXtBlock( expansion, ni, nf, groups = 32, reduction = 16, stride = 1, base_width = 4 )
SEResNeXtBlock( expansion, ni, nf, groups = 32, reduction = 16, stride = 1, base_width = 4 )
expansion |
expansion factor |
ni |
number of linear inputs |
nf |
number of features |
groups |
groups number |
reduction |
reduction ratio |
stride |
stride number |
base_width |
int, base width |
Block object
Set freeze model
set_freeze_model(m, rg)
set_freeze_model(m, rg)
m |
model |
rg |
logical, the 'requires_grad' value to set |
None
Set_item_pg
set_item_pg(pg, k, v)
set_item_pg(pg, k, v)
pg |
pg |
k |
k |
v |
v |
None
Go through 'tfms' and combine together affine/coord or lighting transforms
setup_aug_tfms(tfms)
setup_aug_tfms(tfms)
tfms |
transformations |
None
Sgd_step
sgd_step(p, lr, ...)
sgd_step(p, lr, ...)
p |
parameter tensor |
lr |
learning rate |
... |
additional arguments to pass |
None
## Not run: 
# Create a tensor with `val` and a gradient of `grad` for testing
tst_param = function(val, grad = NULL) {
  res = tensor(val) %>% float()
  if (is.null(grad)) {
    grad = tensor(val / 10)
  } else {
    grad = tensor(grad)
  }
  res$grad = grad %>% float()
  res
}
p = tst_param(1., 0.1)
sgd_step(p, 1.)
## End(Not run)
Shifts spectrogram along x-axis wrapping around to other side
SGRoll(max_shift_pct = 0.5, direction = 0)
SGRoll(max_shift_pct = 0.5, direction = 0)
max_shift_pct |
maximum shift percentage |
direction |
direction |
None
Base interpreter for the 'SHAP' interpretation library
ShapInterpretation( learn, test_data = NULL, link = "identity", l1_reg = "auto", n_samples = 128 )
ShapInterpretation( learn, test_data = NULL, link = "identity", l1_reg = "auto", n_samples = 128 )
learn |
learner/model |
test_data |
should be either a Pandas dataframe or a TabularDataLoader; if NULL, 100 random rows of the training data will be used instead. |
link |
link can either be "identity" or "logit". A generalized linear model link to connect the feature importance values to the model output. Since the feature importance values, phi, sum up to the model output, it often makes sense to connect them to the output with a link function where link(output) = sum(phi). If the model output is a probability, then the LogitLink link function makes the feature importance values have log-odds units. |
l1_reg |
can be an integer value representing the number of features, "auto", "aic", "bic", or a float value. The L1 regularization to use for feature selection (the estimation procedure is based on a debiased lasso). The "auto" option currently uses "aic" when less than 20% of the possible sample space is enumerated; otherwise it uses no regularization. |
n_samples |
can either be "auto" or an integer value. This is the number of times to re-evaluate the model when explaining each predictions. More samples leads to lower variance estimations of the SHAP values |
None
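A minimal usage sketch, assuming 'learn' is an already fitted tabular learner; see 'summary_plot' later in this manual:
## Not run: 
exp = ShapInterpretation(learn, n_samples = 128)
exp %>% summary_plot(dpi = 150)
## End(Not run)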
Merge a shortcut with the result of the module by adding them. Adds Conv, BN and ReLU
Shortcut(ni, nf, act_fn = nn$ReLU(inplace = TRUE))
Shortcut(ni, nf, act_fn = nn$ReLU(inplace = TRUE))
ni |
number of input channels |
nf |
number of features |
act_fn |
activation |
None
Fit just 'pct' of an epoch, then stop
ShortEpochCallback(pct = 0.01, short_valid = TRUE)
ShortEpochCallback(pct = 0.01, short_valid = TRUE)
pct |
percentage |
short_valid |
short_valid or not |
None
Adds functionality to view dicom images where each file may have more than 1 frame
show(img, frames = 1, scale = TRUE, ...)
show(img, frames = 1, scale = TRUE, ...)
img |
image object |
frames |
number of frames |
scale |
scale |
... |
additional arguments |
None
Show an array on 'ax'.
show_array( array, ax = NULL, figsize = NULL, title = NULL, ctx = NULL, tx = NULL )
show_array( array, ax = NULL, figsize = NULL, title = NULL, ctx = NULL, tx = NULL )
array |
R array |
ax |
axis |
figsize |
figure size |
title |
title, text |
ctx |
ctx |
tx |
tx |
None
## Not run: 
arr = as.array(1:10)
show_array(arr, title = 'My R array') %>% plot(dpi = 200)
## End(Not run)
Show_batch
show_batch( dls, b = NULL, max_n = 9, ctxs = NULL, figsize = c(6, 6), show = TRUE, unique = FALSE, dpi = 120, ... )
show_batch( dls, b = NULL, max_n = 9, ctxs = NULL, figsize = c(6, 6), show = TRUE, unique = FALSE, dpi = 120, ... )
dls |
dataloader object |
b |
defaults to one_batch |
max_n |
maximum images |
ctxs |
ctxs parameter |
figsize |
figure size |
show |
show or not |
unique |
unique images |
dpi |
dots per inch |
... |
additional arguments to pass |
None
## Not run: dls %>% show_batch() ## End(Not run)
Show a PIL or PyTorch image on 'ax'.
show_image( im, ax = NULL, figsize = NULL, title = NULL, ctx = NULL, cmap = NULL, norm = NULL, aspect = NULL, interpolation = NULL, alpha = NULL, vmin = NULL, vmax = NULL, origin = NULL, extent = NULL )
show_image( im, ax = NULL, figsize = NULL, title = NULL, ctx = NULL, cmap = NULL, norm = NULL, aspect = NULL, interpolation = NULL, alpha = NULL, vmin = NULL, vmax = NULL, origin = NULL, extent = NULL )
im |
im |
ax |
axis |
figsize |
figure size |
title |
title |
ctx |
ctx |
cmap |
color maps |
norm |
normalization |
aspect |
aspect |
interpolation |
interpolation |
alpha |
alpha value |
vmin |
value min |
vmax |
value max |
origin |
origin |
extent |
extent |
Show all images 'ims' as subplots with 'rows' using 'titles'
show_images( ims, nrows = 1, ncols = NULL, titles = NULL, figsize = NULL, imsize = 3, add_vert = 0 )
show_images( ims, nrows = 1, ncols = NULL, titles = NULL, figsize = NULL, imsize = 3, add_vert = 0 )
ims |
images |
nrows |
number of rows |
ncols |
number of columns |
titles |
titles |
figsize |
figure size |
imsize |
image size |
add_vert |
add vertical |
None
Show_preds
show_preds( predictions, idx, class_map = NULL, denormalize_fn = denormalize_imagenet(), display_label = TRUE, display_bbox = TRUE, display_mask = TRUE, ncols = 1, figsize = NULL, show = FALSE, dpi = 100 )
show_preds( predictions, idx, class_map = NULL, denormalize_fn = denormalize_imagenet(), display_label = TRUE, display_bbox = TRUE, display_mask = TRUE, ncols = 1, figsize = NULL, show = FALSE, dpi = 100 )
predictions |
provide list of raw predictions |
idx |
image indices |
class_map |
class_map |
denormalize_fn |
denormalize_fn |
display_label |
display_label |
display_bbox |
display_bbox |
display_mask |
display_mask |
ncols |
ncols |
figsize |
figsize |
show |
show |
dpi |
dots per inch |
None
Show some predictions on 'ds_idx'-th dataset or 'dl'
show_results( object, ds_idx = 1, dl = NULL, max_n = 9, shuffle = TRUE, dpi = 90, ... )
show_results( object, ds_idx = 1, dl = NULL, max_n = 9, shuffle = TRUE, dpi = 90, ... )
object |
model |
ds_idx |
ds by index |
dl |
dataloader |
max_n |
maximum number of images |
shuffle |
shuffle or not |
dpi |
dots per inch |
... |
additional arguments |
None
Show_samples
show_samples( dls, idx, class_map = NULL, denormalize_fn = denormalize_imagenet(), display_label = TRUE, display_bbox = TRUE, display_mask = TRUE, ncols = 1, figsize = NULL, show = FALSE, dpi = 100 )
show_samples( dls, idx, class_map = NULL, denormalize_fn = denormalize_imagenet(), display_label = TRUE, display_bbox = TRUE, display_mask = TRUE, ncols = 1, figsize = NULL, show = FALSE, dpi = 100 )
dls |
dataloader |
idx |
image indices |
class_map |
class_map |
denormalize_fn |
denormalize_fn |
display_label |
display_label |
display_bbox |
display_bbox |
display_mask |
display_mask |
ncols |
ncols |
figsize |
figsize |
show |
show |
dpi |
dots per inch |
None
Update the progress bar with input and prediction images
ShowCycleGANImgsCallback(imgA = FALSE, imgB = TRUE, show_img_interval = 10)
ShowCycleGANImgsCallback(imgA = FALSE, imgB = TRUE, show_img_interval = 10)
imgA |
img from A domain |
imgB |
img from B domain |
show_img_interval |
show image interval |
None
ShowGraphCallback
ShowGraphCallback(...)
ShowGraphCallback(...)
... |
parameters to pass |
None
Same as 'torch$sigmoid', plus clamping to '(eps, 1-eps)'
sigmoid(input, eps = 1e-07)
sigmoid(input, eps = 1e-07)
input |
inputs |
eps |
epsilon |
None
Same as 'torch$sigmoid_', plus clamping to '(eps, 1-eps)'
sigmoid_(input, eps = 1e-07)
sigmoid_(input, eps = 1e-07)
input |
input |
eps |
eps |
None
Sigmoid function with range '(low, high)'
sigmoid_range(x, low, high)
sigmoid_range(x, low, high)
x |
tensor |
low |
low value |
high |
high value |
None
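The mapping is 'sigmoid(x) * (high - low) + low', so all outputs land in '(low, high)'; a quick sketch:
## Not run: 
sigmoid_range(tensor(c(-5, 0, 5)), low = 0, high = 5)
## End(Not run)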
Sigmoid module with range '(low, high)'
SigmoidRange(low, high)
SigmoidRange(low, high)
low |
low value |
high |
high value |
None
Randomly zeros some portion of the signal
SignalCutout(p = 0.5, max_cut_pct = 0.15)
SignalCutout(p = 0.5, max_cut_pct = 0.15)
p |
probability |
max_cut_pct |
max cut percentage |
None
Randomly loses some portion of the signal
SignalLoss(p = 0.5, max_loss_pct = 0.15)
SignalLoss(p = 0.5, max_loss_pct = 0.15)
p |
probability |
max_loss_pct |
max loss percentage |
None
Randomly shifts the audio signal by 'max_pct'
SignalShifter( p = 0.5, max_pct = 0.2, max_time = NULL, direction = 0, roll = FALSE )
SignalShifter( p = 0.5, max_pct = 0.2, max_time = NULL, direction = 0, roll = FALSE )
p |
probability |
max_pct |
max percentage |
max_time |
maximum time |
direction |
direction |
roll |
roll or not |
direction must be -1 (left), 0 (bidirectional), or 1 (right).
None
Create a simple CNN with 'filters'.
SimpleCNN(filters, kernel_szs = NULL, strides = NULL, bn = TRUE)
SimpleCNN(filters, kernel_szs = NULL, strides = NULL, bn = TRUE)
filters |
filters number |
kernel_szs |
kernel size |
strides |
strides |
bn |
batch normalization |
None
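A minimal sketch, reading 'filters' as the sequence of channel sizes (here 3 -> 16 -> 32, i.e. two convolutions with default kernel sizes and strides):
## Not run: 
net = SimpleCNN(c(3, 16, 32))
## End(Not run)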
Same as 'nn()$Module', but no need for subclasses to call 'super()$__init__'
SimpleSelfAttention(n_in, ks = 1, sym = FALSE)
SimpleSelfAttention(n_in, ks = 1, sym = FALSE)
n_in |
inputs |
ks |
kernel size |
sym |
sym |
None
Sin
## S3 method for class 'torch.Tensor' sin(x)
## S3 method for class 'torch.Tensor' sin(x)
x |
tensor |
tensor
Sin
## S3 method for class 'fastai.torch_core.TensorMask' sin(x)
## S3 method for class 'fastai.torch_core.TensorMask' sin(x)
x |
tensor |
tensor
Sinh
## S3 method for class 'fastai.torch_core.TensorMask' sinh(x)
## S3 method for class 'fastai.torch_core.TensorMask' sinh(x)
x |
tensor |
tensor
Convert 'func' from sklearn$metrics to a fastai metric
skm_to_fastai( func, is_class = TRUE, thresh = NULL, axis = -1, activation = NULL, ... )
skm_to_fastai( func, is_class = TRUE, thresh = NULL, axis = -1, activation = NULL, ... )
func |
function |
is_class |
is classification or not |
thresh |
threshold point |
axis |
axis |
activation |
activation |
... |
additional arguments to pass |
None
Slice
slice(...)
slice(...)
... |
additional arguments |
slice(start, stop[, step]) Create a slice object. This is used for extended slicing (e.g. a[0:10:2]).
sliced object
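One common fastai use is a learning-rate range for discriminative fine-tuning; a hedged sketch, assuming a fitted 'learn' object:
## Not run: 
learn %>% fit_one_cycle(1, lr_max = slice(1e-5, 1e-3))
## End(Not run)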
Sort
## S3 method for class 'torch.Tensor' sort(x, decreasing = FALSE, ...)
## S3 method for class 'torch.Tensor' sort(x, decreasing = FALSE, ...)
x |
tensor |
decreasing |
the order |
... |
additional parameters to pass |
Sort
## S3 method for class 'fastai.torch_core.TensorMask' sort(x, decreasing = FALSE, ...)
## S3 method for class 'fastai.torch_core.TensorMask' sort(x, decreasing = FALSE, ...)
x |
tensor |
decreasing |
the order |
... |
additional parameters to pass |
tensor
A 'DataLoader' that goes through the items in the order given by 'sort_func'
SortedDL( dataset, sort_func = NULL, res = NULL, bs = 64, shuffle = FALSE, num_workers = NULL, verbose = FALSE, do_setup = TRUE, pin_memory = FALSE, timeout = 0, batch_size = NULL, drop_last = FALSE, indexed = NULL, n = NULL, device = NULL )
SortedDL( dataset, sort_func = NULL, res = NULL, bs = 64, shuffle = FALSE, num_workers = NULL, verbose = FALSE, do_setup = TRUE, pin_memory = FALSE, timeout = 0, batch_size = NULL, drop_last = FALSE, indexed = NULL, n = NULL, device = NULL )
dataset |
dataset |
sort_func |
sort_func |
res |
res |
bs |
bs |
shuffle |
shuffle |
num_workers |
num_workers |
verbose |
verbose |
do_setup |
do_setup |
pin_memory |
pin_memory |
timeout |
timeout |
batch_size |
batch_size |
drop_last |
drop_last |
indexed |
indexed |
n |
n |
device |
device |
None
Spacy tokenizer for 'lang'
SpacyTokenizer(lang = "en", special_toks = NULL, buf_sz = 5000)
SpacyTokenizer(lang = "en", special_toks = NULL, buf_sz = 5000)
lang |
language |
special_toks |
special tokenizers |
buf_sz |
buffer size |
None
Spearman correlation coefficient for regression problem
SpearmanCorrCoef( dim_argmax = NULL, axis = 0, nan_policy = "propagate", activation = "no", thresh = NULL, to_np = FALSE, invert_arg = FALSE, flatten = TRUE )
SpearmanCorrCoef( dim_argmax = NULL, axis = 0, nan_policy = "propagate", activation = "no", thresh = NULL, to_np = FALSE, invert_arg = FALSE, flatten = TRUE )
dim_argmax |
dim_argmax |
axis |
axis |
nan_policy |
nan_policy |
activation |
activation |
thresh |
thresh |
to_np |
to_np |
invert_arg |
invert_arg |
flatten |
flatten |
None
Add spaces around / and #
spec_add_spaces(t)
spec_add_spaces(t)
t |
text |
string
Creates a factory for creating AudioToSpec
SpectrogramTransformer(mel = TRUE, to_db = TRUE)
SpectrogramTransformer(mel = TRUE, to_db = TRUE)
mel |
mel-spectrogram or not |
to_db |
to decibels |
transforms with different parameters
None
Sqrt
## S3 method for class 'torch.Tensor' sqrt(x)
## S3 method for class 'torch.Tensor' sqrt(x)
x |
tensor |
tensor
Sqrt
## S3 method for class 'fastai.torch_core.TensorMask' sqrt(x)
## S3 method for class 'fastai.torch_core.TensorMask' sqrt(x)
x |
tensor |
tensor
Base class for all neural network modules.
SqueezeNet(version = "1_0", num_classes = 1000)
SqueezeNet(version = "1_0", num_classes = 1000)
version |
version of SqueezeNet |
num_classes |
the number of classes |
Your models should also subclass this class. Modules can contain other Modules, allowing them to be nested in a tree structure; submodules assigned as regular attributes (in the Python sources, e.g. 'self.conv1 = nn.Conv2d(1, 20, 5)' inside '__init__') are registered, and their parameters are converted too when you call methods such as 'to()'.
model
SqueezeNet model architecture from the "SqueezeNet: AlexNet-level
squeezenet1_0(pretrained = FALSE, progress)
squeezenet1_0(pretrained = FALSE, progress)
pretrained |
pretrained or not |
progress |
to see progress bar or not |
accuracy with 50x fewer parameters and <0.5MB model size" <https://arxiv.org/abs/1602.07360> paper.
model
SqueezeNet 1.1 model from the official SqueezeNet repo
squeezenet1_1(pretrained = FALSE, progress)
squeezenet1_1(pretrained = FALSE, progress)
pretrained |
pretrained or not |
progress |
to see progress bar or not |
<https://github.com/DeepScale/SqueezeNet/tree/master/SqueezeNet_v1.1>. SqueezeNet 1.1 has 2.4x less computation and slightly fewer parameters than SqueezeNet 1.0, without sacrificing accuracy.
model
Stack 'df_train' and 'df_valid', adding a 'valid_col' that is TRUE for rows from 'df_valid' and FALSE for rows from 'df_train'
stack_train_valid(df_train, df_valid)
stack_train_valid(df_train, df_valid)
df_train |
train data |
df_valid |
validation data |
data frame
Register the number of steps done in 'state' for 'p'
step_stat(p, step = 0, ...)
step_stat(p, step = 0, ...)
p |
parameter tensor |
step |
step |
... |
additional args to pass |
None
Sub
## S3 method for class 'torch.Tensor' a - b
## S3 method for class 'torch.Tensor' a - b
a |
tensor |
b |
tensor |
tensor
Sub
## S3 method for class 'fastai.torch_core.TensorMask' a - b
## S3 method for class 'fastai.torch_core.TensorMask' a - b
a |
tensor |
b |
tensor |
tensor
Subplots
subplots(nrows = 2, ncols = 2, figsize = NULL, imsize = 4)
subplots(nrows = 2, ncols = 2, figsize = NULL, imsize = 4)
nrows |
number of rows |
ncols |
number of columns |
figsize |
figure size |
imsize |
image size |
plot object
Custom param splitter for summarization models
summarization_splitter(m, arch)
summarization_splitter(m, arch)
m |
splitter parameter |
arch |
architecture |
None
Displays the SHAP values (which can be interpreted for feature importance)
summary_plot(object, dpi = 200, ...)
summary_plot(object, dpi = 200, ...)
object |
ShapInterpretation object |
dpi |
dots per inch |
... |
additional arguments |
None
Summary
## S3 method for class 'fastai.learner.Learner' summary(object, ...)
## S3 method for class 'fastai.learner.Learner' summary(object, ...)
object |
model |
... |
additional arguments to pass |
None
## Not run: summary(model) ## End(Not run)
Print a summary of 'm' using a output text width of 'n' chars
## S3 method for class 'fastai.tabular.learner.TabularLearner' summary(object, ...)
## S3 method for class 'fastai.tabular.learner.TabularLearner' summary(object, ...)
object |
model |
... |
additional parameters to pass |
None
Swish
swish(x, inplace = FALSE)
swish(x, inplace = FALSE)
x |
tensor |
inplace |
inplace or not |
None
Same as nn()$Module, but no need for subclasses to call super()$__init__
Swish_(...)
Swish_(...)
... |
parameters to pass |
None
Convenience function to easily create a config for 'TabularModel'
tabular_config( ps = NULL, embed_p = 0, y_range = NULL, use_bn = TRUE, bn_final = FALSE, bn_cont = TRUE, act_cls = nn()$ReLU(inplace = TRUE) )
tabular_config( ps = NULL, embed_p = 0, y_range = NULL, use_bn = TRUE, bn_final = FALSE, bn_cont = TRUE, act_cls = nn()$ReLU(inplace = TRUE) )
ps |
ps |
embed_p |
embedding dropout probability |
y_range |
y_range |
use_bn |
use batch normalization |
bn_final |
batch normalization final |
bn_cont |
batch normalization |
act_cls |
activation |
None
Get a 'Learner' using 'dls', with 'metrics', including a 'TabularModel' created using the remaining params.
tabular_learner( dls, layers = NULL, emb_szs = NULL, config = NULL, n_out = NULL, y_range = NULL, loss_func = NULL, opt_func = Adam(), lr = 0.001, splitter = trainable_params(), cbs = NULL, metrics = NULL, path = NULL, model_dir = "models", wd = NULL, wd_bn_bias = FALSE, train_bn = TRUE, moms = list(0.95, 0.85, 0.95) )
tabular_learner( dls, layers = NULL, emb_szs = NULL, config = NULL, n_out = NULL, y_range = NULL, loss_func = NULL, opt_func = Adam(), lr = 0.001, splitter = trainable_params(), cbs = NULL, metrics = NULL, path = NULL, model_dir = "models", wd = NULL, wd_bn_bias = FALSE, train_bn = TRUE, moms = list(0.95, 0.85, 0.95) )
dls |
It is a DataLoaders object. |
layers |
layers |
emb_szs |
emb_szs |
config |
config |
n_out |
n_out |
y_range |
y_range |
loss_func |
It can be any loss function you like. |
opt_func |
It will be used to create an optimizer when Learner.fit is called. |
lr |
It is learning rate. |
splitter |
It is a function that takes self.model and returns a list of parameter groups (or just one parameter group if there are no different parameter groups) |
cbs |
It is one or a list of Callbacks to pass to the Learner. |
metrics |
It is an optional list of metrics, that can be either functions or Metrics. |
path |
It is used to save and/or load models. Often path will be inferred from dls, but you can override it or pass a Path object to model_dir. Make sure you can write in path/model_dir! |
model_dir |
It is used to save and/or load models. Often path will be inferred from dls, but you can override it or pass a Path object to model_dir. Make sure you can write in path/model_dir! |
wd |
It is the default weight decay used when training the model. |
wd_bn_bias |
It controls if weight decay is applied to BatchNorm layers and bias. |
train_bn |
It controls if BatchNorm layers are trained even when they are supposed to be frozen according to the splitter. |
moms |
The default momentums used in Learner.fit_one_cycle. |
learner object
A 'Tabular' object with transforms
TabularDataTable( df, procs = NULL, cat_names = NULL, cont_names = NULL, y_names = NULL, y_block = NULL, splits = NULL, do_setup = TRUE, device = NULL, inplace = FALSE, reduce_memory = TRUE, ... )
TabularDataTable( df, procs = NULL, cat_names = NULL, cont_names = NULL, y_names = NULL, y_block = NULL, splits = NULL, do_setup = TRUE, device = NULL, inplace = FALSE, reduce_memory = TRUE, ... )
df |
A DataFrame of your data |
procs |
list of preprocess functions |
cat_names |
the names of the categorical variables |
cont_names |
the names of the continuous variables |
y_names |
the names of the dependent variables |
y_block |
the TransformBlock to use for the target |
splits |
How to split your data |
do_setup |
A parameter for if Tabular will run the data through the procs upon initialization |
device |
cuda or cpu |
inplace |
If True, Tabular will not keep a separate copy of your original DataFrame in memory |
reduce_memory |
fastai will attempt to reduce the overall memory usage |
... |
additional parameters to pass |
None
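A minimal sketch with illustrative column names, assuming 'df' and 'splits' exist and the usual fastai procs ('FillMissing()', 'Categorify()', 'Normalize()') are available:
## Not run: 
proc_df = TabularDataTable(df,
  procs = list(FillMissing(), Categorify(), Normalize()),
  cat_names = c('workclass', 'education'), cont_names = c('age'),
  y_names = 'salary', splits = splits)
## End(Not run)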
Basic model for tabular data.
TabularModel( emb_szs, n_cont, out_sz, layers, ps = NULL, embed_p = 0, y_range = NULL, use_bn = TRUE, bn_final = FALSE, bn_cont = TRUE, act_cls = nn()$ReLU(inplace = TRUE) )
TabularModel( emb_szs, n_cont, out_sz, layers, ps = NULL, embed_p = 0, y_range = NULL, use_bn = TRUE, bn_final = FALSE, bn_cont = TRUE, act_cls = nn()$ReLU(inplace = TRUE) )
emb_szs |
embedding size |
n_cont |
number of continuous variables |
out_sz |
output size |
layers |
layers |
ps |
ps |
embed_p |
embedding dropout probability |
y_range |
y range |
use_bn |
use batch normalization |
bn_final |
batch normalization final |
bn_cont |
batch normalization cont |
act_cls |
activation |
None
A 'DataFrame' wrapper that knows which cols are x/y, and returns rows in '__getitem__'
TabularTS( df, procs = NULL, x_names = NULL, y_names = NULL, block_y = NULL, splits = NULL, do_setup = TRUE, device = NULL, inplace = FALSE )
TabularTS( df, procs = NULL, x_names = NULL, y_names = NULL, block_y = NULL, splits = NULL, do_setup = TRUE, device = NULL, inplace = FALSE )
df |
A DataFrame of your data |
procs |
list of preprocess functions |
x_names |
predictors names |
y_names |
the names of the dependent variables |
block_y |
the TransformBlock to use for the target |
splits |
How to split your data |
do_setup |
A parameter for if Tabular will run the data through the procs upon initialization |
device |
device name |
inplace |
If True, Tabular will not keep a separate copy of your original DataFrame in memory |
None
Transformed 'DataLoader'
TabularTSDataloader( dataset, bs = 16, shuffle = FALSE, after_batch = NULL, num_workers = 0, verbose = FALSE, do_setup = TRUE, pin_memory = FALSE, timeout = 0, batch_size = NULL, drop_last = FALSE, indexed = NULL, n = NULL, device = NULL )
TabularTSDataloader( dataset, bs = 16, shuffle = FALSE, after_batch = NULL, num_workers = 0, verbose = FALSE, do_setup = TRUE, pin_memory = FALSE, timeout = 0, batch_size = NULL, drop_last = FALSE, indexed = NULL, n = NULL, device = NULL )
dataset |
data set |
bs |
batch size |
shuffle |
shuffle or not |
after_batch |
after batch |
num_workers |
the number of workers |
verbose |
verbose |
do_setup |
A parameter for if Tabular will run the data through the procs upon initialization |
pin_memory |
pin memory or not |
timeout |
timeout |
batch_size |
batch size |
drop_last |
drop last |
indexed |
indexed |
n |
n |
device |
device name |
None
Extract 'fname' to 'dest'/'fname.name' folder using 'tarfile'
tar_extract_at_filename(fname, dest)
tar_extract_at_filename(fname, dest)
fname |
folder name |
dest |
destination |
None
Like 'torch()$as_tensor', but handle lists too, and can pass multiple vector elements directly.
tensor(...)
tensor(...)
... |
objects to convert: vectors, lists, or arrays |
None
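Vectors and lists convert directly; a quick sketch:
## Not run: 
tensor(1:10)
tensor(list(c(1, 2), c(3, 4)))
## End(Not run)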
Basic type for a tensor of bounding boxes in an image
TensorBBox(x)
TensorBBox(x)
x |
tensor |
None
TensorBBox_create
TensorBBox_create(x, img_size = NULL)
TensorBBox_create(x, img_size = NULL)
x |
tensor |
img_size |
image size |
None
TensorImage
TensorImage(x)
TensorImage(x)
x |
tensor |
None
TensorImageBW
TensorImageBW(x)
TensorImageBW(x)
x |
tensor |
None
TensorMultiCategory
TensorMultiCategory(x)
TensorMultiCategory(x)
x |
tensor |
None
Basic type for points in an image
TensorPoint(x)
TensorPoint(x)
x |
tensor |
None
Delegates ('__call__','decode','setup') to ('encodes','decodes','setups') if 'split_idx' matches
TensorPoint_create(...)
TensorPoint_create(...)
... |
arguments to pass |
None
TerminateOnNaNCallback
TerminateOnNaNCallback(...)
TerminateOnNaNCallback(...)
... |
parameters to pass |
None
Data loader. Combines a dataset and a sampler, and provides an iterable over
test_loader()
test_loader()
the given dataset. The 'torch.utils.data.DataLoader' supports both map-style and iterable-style datasets with single- or multi-process loading, customizing loading order and optional automatic batching (collation) and memory pinning. See the 'torch.utils.data' documentation page for more details.
loader
Create a 'Learner' with a text classifier from 'dls' and 'arch'.
text_classifier_learner( dls, arch, seq_len = 72, config = NULL, backwards = FALSE, pretrained = TRUE, drop_mult = 0.5, n_out = NULL, lin_ftrs = NULL, ps = NULL, max_len = 1440, y_range = NULL, loss_func = NULL, opt_func = Adam(), lr = 0.001, splitter = trainable_params, cbs = NULL, metrics = NULL, path = NULL, model_dir = "models", wd = NULL, wd_bn_bias = FALSE, train_bn = TRUE, moms = list(0.95, 0.85, 0.95) )
text_classifier_learner( dls, arch, seq_len = 72, config = NULL, backwards = FALSE, pretrained = TRUE, drop_mult = 0.5, n_out = NULL, lin_ftrs = NULL, ps = NULL, max_len = 1440, y_range = NULL, loss_func = NULL, opt_func = Adam(), lr = 0.001, splitter = trainable_params, cbs = NULL, metrics = NULL, path = NULL, model_dir = "models", wd = NULL, wd_bn_bias = FALSE, train_bn = TRUE, moms = list(0.95, 0.85, 0.95) )
dls |
dls |
arch |
arch |
seq_len |
seq_len |
config |
config |
backwards |
backwards |
pretrained |
pretrained |
drop_mult |
drop_mult |
n_out |
n_out |
lin_ftrs |
lin_ftrs |
ps |
ps |
max_len |
max_len |
y_range |
y_range |
loss_func |
loss_func |
opt_func |
opt_func |
lr |
lr |
splitter |
splitter |
cbs |
cbs |
metrics |
metrics |
path |
path |
model_dir |
model_dir |
wd |
wd |
wd_bn_bias |
wd_bn_bias |
train_bn |
train_bn |
moms |
moms |
None
A 'TransformBlock' for texts
TextBlock( tok_tfm, vocab = NULL, is_lm = FALSE, seq_len = 72, backwards = FALSE, min_freq = 3, max_vocab = 60000, special_toks = NULL, pad_tok = NULL )
TextBlock( tok_tfm, vocab = NULL, is_lm = FALSE, seq_len = 72, backwards = FALSE, min_freq = 3, max_vocab = 60000, special_toks = NULL, pad_tok = NULL )
tok_tfm |
tok_tfm |
vocab |
vocab |
is_lm |
is_lm |
seq_len |
seq_len |
backwards |
backwards |
min_freq |
min_freq |
max_vocab |
max_vocab |
special_toks |
special_toks |
pad_tok |
pad_tok |
block object
Build a 'TextBlock' from a dataframe using 'text_cols'
TextBlock_from_df( text_cols, vocab = NULL, is_lm = FALSE, seq_len = 72, backwards = FALSE, min_freq = 3, max_vocab = 60000, tok = NULL, rules = NULL, sep = " ", n_workers = 6, mark_fields = NULL, tok_text_col = "text" )
TextBlock_from_df( text_cols, vocab = NULL, is_lm = FALSE, seq_len = 72, backwards = FALSE, min_freq = 3, max_vocab = 60000, tok = NULL, rules = NULL, sep = " ", n_workers = 6, mark_fields = NULL, tok_text_col = "text" )
text_cols |
text columns |
vocab |
vocabulary |
is_lm |
is_lm |
seq_len |
sequence length |
backwards |
backwards |
min_freq |
minimum frequency |
max_vocab |
max vocabulary |
tok |
tokenizer |
rules |
rules |
sep |
separator |
n_workers |
number workers |
mark_fields |
mark_fields |
tok_text_col |
result column name |
None
Build a 'TextBlock' from a 'path'
TextBlock_from_folder( path, vocab = NULL, is_lm = FALSE, seq_len = 72, backwards = FALSE, min_freq = 3, max_vocab = 60000, tok = NULL, rules = NULL, extensions = NULL, folders = NULL, output_dir = NULL, skip_if_exists = TRUE, output_names = NULL, n_workers = 6, encoding = "utf8" )
TextBlock_from_folder( path, vocab = NULL, is_lm = FALSE, seq_len = 72, backwards = FALSE, min_freq = 3, max_vocab = 60000, tok = NULL, rules = NULL, extensions = NULL, folders = NULL, output_dir = NULL, skip_if_exists = TRUE, output_names = NULL, n_workers = 6, encoding = "utf8" )
path |
path |
vocab |
vocabulary |
is_lm |
is_lm |
seq_len |
sequence length |
backwards |
backwards |
min_freq |
minimum frequency |
max_vocab |
max vocabulary |
tok |
tokenizer |
rules |
rules |
extensions |
extensions |
folders |
folders |
output_dir |
output_dir |
skip_if_exists |
skip_if_exists |
output_names |
output_names |
n_workers |
number of workers |
encoding |
encoding |
None
Create from 'csv' file in 'path/csv_fname'
TextDataLoaders_from_csv( path, csv_fname = "labels.csv", header = "infer", delimiter = NULL, valid_pct = 0.2, seed = NULL, text_col = 0, label_col = 1, label_delim = NULL, y_block = NULL, text_vocab = NULL, is_lm = FALSE, valid_col = NULL, tok_tfm = NULL, seq_len = 72, backwards = FALSE, bs = 64, val_bs = NULL, shuffle_train = TRUE, device = NULL )
TextDataLoaders_from_csv( path, csv_fname = "labels.csv", header = "infer", delimiter = NULL, valid_pct = 0.2, seed = NULL, text_col = 0, label_col = 1, label_delim = NULL, y_block = NULL, text_vocab = NULL, is_lm = FALSE, valid_col = NULL, tok_tfm = NULL, seq_len = 72, backwards = FALSE, bs = 64, val_bs = NULL, shuffle_train = TRUE, device = NULL )
path |
path |
csv_fname |
csv file name |
header |
header |
delimiter |
delimiter |
valid_pct |
validation percentage |
seed |
random seed |
text_col |
text column |
label_col |
label column |
label_delim |
label separator |
y_block |
y_block |
text_vocab |
text vocabulary |
is_lm |
is_lm |
valid_col |
valid column |
tok_tfm |
tok_tfm |
seq_len |
seq_len |
backwards |
backwards |
bs |
batch size |
val_bs |
validation batch size |
shuffle_train |
shuffle train data |
device |
device |
text loader
Create from 'df' in 'path' with 'valid_pct'
TextDataLoaders_from_df( df, path = ".", valid_pct = 0.2, seed = NULL, text_col = 0, label_col = 1, label_delim = NULL, y_block = NULL, text_vocab = NULL, is_lm = FALSE, valid_col = NULL, tok_tfm = NULL, seq_len = 72, backwards = FALSE, bs = 64, val_bs = NULL, shuffle_train = TRUE, device = NULL )
TextDataLoaders_from_df( df, path = ".", valid_pct = 0.2, seed = NULL, text_col = 0, label_col = 1, label_delim = NULL, y_block = NULL, text_vocab = NULL, is_lm = FALSE, valid_col = NULL, tok_tfm = NULL, seq_len = 72, backwards = FALSE, bs = 64, val_bs = NULL, shuffle_train = TRUE, device = NULL )
df |
df |
path |
path |
valid_pct |
validation percentage |
seed |
seed |
text_col |
text_col |
label_col |
label_col |
label_delim |
label_delim |
y_block |
y_block |
text_vocab |
text_vocab |
is_lm |
is_lm |
valid_col |
valid_col |
tok_tfm |
tok_tfm |
seq_len |
seq_len |
backwards |
backwards |
bs |
batch size |
val_bs |
validation batch size, if not specified then val_bs is the same as bs. |
shuffle_train |
shuffle_train |
device |
device |
text loader
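A minimal sketch, assuming 'df' holds one text column and one label column:
## Not run: 
dls = TextDataLoaders_from_df(df, text_col = 'text', label_col = 'label', valid_pct = 0.2)
## End(Not run)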
Create from imagenet style dataset in 'path' with 'train' and 'valid' subfolders (or provide 'valid_pct')
TextDataLoaders_from_folder( path, train = "train", valid = "valid", valid_pct = NULL, seed = NULL, vocab = NULL, text_vocab = NULL, is_lm = FALSE, tok_tfm = NULL, seq_len = 72, backwards = FALSE, bs = 64, val_bs = NULL, shuffle_train = TRUE, device = NULL )
TextDataLoaders_from_folder( path, train = "train", valid = "valid", valid_pct = NULL, seed = NULL, vocab = NULL, text_vocab = NULL, is_lm = FALSE, tok_tfm = NULL, seq_len = 72, backwards = FALSE, bs = 64, val_bs = NULL, shuffle_train = TRUE, device = NULL )
path |
path |
train |
train data |
valid |
validation data |
valid_pct |
validation percentage |
seed |
random seed |
vocab |
vocabulary |
text_vocab |
text_vocab |
is_lm |
is_lm |
tok_tfm |
tok_tfm |
seq_len |
seq_len |
backwards |
backwards |
bs |
batch size |
val_bs |
validation batch size |
shuffle_train |
shuffle train data |
device |
device |
text loader
Basic class for a 'Learner' in NLP.
TextLearner( dls, model, alpha = 2, beta = 1, moms = list(0.8, 0.7, 0.8), loss_func = NULL, opt_func = Adam(), lr = 0.001, splitter = trainable_params(), cbs = NULL, metrics = NULL, path = NULL, model_dir = "models", wd = NULL, wd_bn_bias = FALSE, train_bn = TRUE )
TextLearner( dls, model, alpha = 2, beta = 1, moms = list(0.8, 0.7, 0.8), loss_func = NULL, opt_func = Adam(), lr = 0.001, splitter = trainable_params(), cbs = NULL, metrics = NULL, path = NULL, model_dir = "models", wd = NULL, wd_bn_bias = FALSE, train_bn = TRUE )
dls |
dls |
model |
model |
alpha |
alpha |
beta |
beta |
moms |
moms |
loss_func |
loss_func |
opt_func |
opt_func |
lr |
lr |
splitter |
splitter |
cbs |
cbs |
metrics |
metrics |
path |
path |
model_dir |
model_dir |
wd |
wd |
wd_bn_bias |
wd_bn_bias |
train_bn |
train_bn |
None
Load the encoder 'file' from the model directory, optionally ensuring it's on 'device'
TextLearner_load_encoder(file, device = NULL)
TextLearner_load_encoder(file, device = NULL)
file |
file |
device |
device |
None
Load a pretrained model and adapt it to the data vocabulary.
TextLearner_load_pretrained(wgts_fname, vocab_fname, model = NULL)
TextLearner_load_pretrained(wgts_fname, vocab_fname, model = NULL)
wgts_fname |
wgts_fname |
vocab_fname |
vocab_fname |
model |
model |
None
Save the encoder to 'file' in the model directory
TextLearner_save_encoder(file)
TextLearner_save_encoder(file)
file |
file |
None
Transformed 'DataLoader'
TfmdDL( dataset, bs = 64, shuffle = FALSE, num_workers = NULL, verbose = FALSE, do_setup = TRUE, pin_memory = FALSE, timeout = 0, batch_size = NULL, drop_last = FALSE, indexed = NULL, n = NULL, device = NULL, after_batch = NULL, ... )
TfmdDL( dataset, bs = 64, shuffle = FALSE, num_workers = NULL, verbose = FALSE, do_setup = TRUE, pin_memory = FALSE, timeout = 0, batch_size = NULL, drop_last = FALSE, indexed = NULL, n = NULL, device = NULL, after_batch = NULL, ... )
dataset |
dataset |
bs |
batch size |
shuffle |
shuffle |
num_workers |
number of workers |
verbose |
verbose |
do_setup |
do setup |
pin_memory |
pin memory |
timeout |
timeout |
batch_size |
batch size |
drop_last |
drop last |
indexed |
indexed |
n |
int, n |
device |
device |
after_batch |
after_batch |
... |
additional arguments to pass |
None
A 'Pipeline' of 'tfms' applied to a collection of 'items'
TfmdLists(...)
TfmdLists(...)
... |
parameters to pass |
Temporary fix to allow image resizing transform
TfmResize(size, interp_mode = "bilinear")
TfmResize(size, interp_mode = "bilinear")
size |
size |
interp_mode |
interpolation mode |
None
Build a convnet style learner from 'dls' and 'arch' using the 'timm' library
timm_learner(dls, arch, ...)
timm_learner(dls, arch, ...)
dls |
dataloader |
arch |
model architecture |
... |
additional arguments |
None
Timm models
timm_list_models(...)
timm_list_models(...)
... |
parameters to pass |
vector
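A quick sketch; the wildcard pattern is assumed to be forwarded to timm's model filter:
## Not run: 
timm_list_models('*efficientnet*')
## End(Not run)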
Convert to bytes, default to PNG format
to_bytes_format(img, format = "png")
to_bytes_format(img, format = "png")
img |
image |
format |
format |
None
Convert a tensor or array to a PIL int8 Image
to_image(x)
to_image(x)
x |
tensor |
None
To matrix
to_matrix(obj, matrix = TRUE)
to_matrix(obj, matrix = TRUE)
obj |
learner/model |
matrix |
bool, to R matrix |
Same as 'thumbnail', but uses a copy
to_thumb(img, h, w = NULL)
to_thumb(img, h, w = NULL)
img |
image |
h |
height |
w |
width |
None
Distribute the training across TPUs
to_xla(object)
to_xla(object)
object |
learner / model |
None
Tokenize texts in the 'text_cols' of the csv 'fname' in parallel using 'n_workers'
tokenize_csv( fname, text_cols, outname = NULL, n_workers = 4, rules = NULL, mark_fields = NULL, tok = NULL, header = "infer", chunksize = 50000 )
tokenize_csv( fname, text_cols, outname = NULL, n_workers = 4, rules = NULL, mark_fields = NULL, tok = NULL, header = "infer", chunksize = 50000 )
fname |
file name |
text_cols |
text columns |
outname |
outname |
n_workers |
number of workers |
rules |
rules |
mark_fields |
mark fields |
tok |
tokenizer |
header |
header |
chunksize |
chunk size |
None
Tokenize texts in 'df[text_cols]' in parallel using 'n_workers'
tokenize_df( df, text_cols, n_workers = 6, rules = NULL, mark_fields = NULL, tok = NULL, tok_text_col = "text" )
tokenize_df( df, text_cols, n_workers = 6, rules = NULL, mark_fields = NULL, tok = NULL, tok_text_col = "text" )
df |
data frame |
text_cols |
text columns |
n_workers |
number of workers |
rules |
rules |
mark_fields |
mark_fields |
tok |
tokenizer |
tok_text_col |
tok_text_col |
None
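A minimal sketch, assuming 'df' has a 'text' column:
## Not run: 
res = tokenize_df(df, text_cols = 'text', n_workers = 2)
## End(Not run)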
Tokenize text 'files' in parallel using 'n_workers'
tokenize_files( files, path, output_dir, output_names = NULL, n_workers = 6, rules = NULL, tok = NULL, encoding = "utf8", skip_if_exists = FALSE )
tokenize_files( files, path, output_dir, output_names = NULL, n_workers = 6, rules = NULL, tok = NULL, encoding = "utf8", skip_if_exists = FALSE )
files |
files |
path |
path |
output_dir |
output_dir |
output_names |
output_names |
n_workers |
n_workers |
rules |
rules |
tok |
tokenizer |
encoding |
encoding |
skip_if_exists |
skip_if_exists |
None
Tokenize text files in 'path' in parallel using 'n_workers'
tokenize_folder( path, extensions = NULL, folders = NULL, output_dir = NULL, skip_if_exists = TRUE, output_names = NULL, n_workers = 6, rules = NULL, tok = NULL, encoding = "utf8" )
tokenize_folder( path, extensions = NULL, folders = NULL, output_dir = NULL, skip_if_exists = TRUE, output_names = NULL, n_workers = 6, rules = NULL, tok = NULL, encoding = "utf8" )
path |
path |
extensions |
extensions |
folders |
folders |
output_dir |
output_dir |
skip_if_exists |
skip_if_exists |
output_names |
output_names |
n_workers |
number of workers |
rules |
rules |
tok |
tokenizer |
encoding |
encoding |
None
Tokenize 'texts' in parallel using 'n_workers'
tokenize_texts(texts, n_workers = 6, rules = NULL, tok = NULL)
tokenize_texts(texts, n_workers = 6, rules = NULL, tok = NULL)
texts |
texts |
n_workers |
n_workers |
rules |
rules |
tok |
tok |
None
Call 'TokenizeWithRules' with a single text
tokenize1(text, tok, rules = NULL, post_rules = NULL)
tokenize1(text, tok, rules = NULL, post_rules = NULL)
text |
text |
tok |
tok |
rules |
rules |
post_rules |
post_rules |
None
Provides a consistent 'Transform' interface to tokenizers operating on 'DataFrame's and folders
Tokenizer( tok, rules = NULL, counter = NULL, lengths = NULL, mode = NULL, sep = " " )
Tokenizer( tok, rules = NULL, counter = NULL, lengths = NULL, mode = NULL, sep = " " )
tok |
tokenizer |
rules |
rules |
counter |
counter |
lengths |
lengths |
mode |
mode |
sep |
separator |
None
Tokenizer_from_df
Tokenizer_from_df( text_cols, tok = NULL, rules = NULL, sep = " ", n_workers = 6, mark_fields = NULL, tok_text_col = "text" )
Tokenizer_from_df( text_cols, tok = NULL, rules = NULL, sep = " ", n_workers = 6, mark_fields = NULL, tok_text_col = "text" )
text_cols |
text columns |
tok |
tokenizer |
rules |
special rules |
sep |
separator |
n_workers |
number of workers |
mark_fields |
mark fields |
tok_text_col |
output column name |
None
A wrapper around 'tok' which applies 'rules', then tokenizes, then applies 'post_rules'
TokenizeWithRules(tok, rules = NULL, post_rules = NULL)
TokenizeWithRules(tok, rules = NULL, post_rules = NULL)
tok |
tokenizer |
rules |
rules |
post_rules |
post_rules |
None
Computes the Top-k accuracy ('targ' is in the top 'k' predictions of 'inp')
top_k_accuracy(inp, targ, k = 5, axis = -1)
top_k_accuracy(inp, targ, k = 5, axis = -1)
inp |
predictions |
targ |
targets |
k |
k |
axis |
axis |
None
## Not run: 
loaders = loaders()
data = Data_Loaders(loaders['train'], loaders['valid'])$cuda()
model = nn$Sequential() + nn$Flatten() + nn$Linear(28L * 28L, 10L)
metrics = list(accuracy, top_k_accuracy)
learn = Learner(data, model, loss_func = F$cross_entropy,
                opt_func = Adam, metrics = metrics)
## End(Not run)
Give the number of parameters of a module and if it's trainable or not
total_params(m)
total_params(m)
m |
m parameter |
None
Convert item to appropriate tensor class
ToTensor(enc = NULL, dec = NULL, split_idx = NULL, order = NULL)
ToTensor(enc = NULL, dec = NULL, split_idx = NULL, order = NULL)
enc |
encoder |
dec |
decoder |
split_idx |
int, split by index |
order |
order |
None
A 'Callback' that keeps track of the best value in 'monitor'.
TrackerCallback(monitor = "valid_loss", comp = NULL, min_delta = 0)
TrackerCallback(monitor = "valid_loss", comp = NULL, min_delta = 0)
monitor |
monitor the loss |
comp |
comp |
min_delta |
minimum delta |
None
Data loader. Combines a dataset and a sampler, and provides an iterable over
train_loader()
train_loader()
the given dataset. The 'torch.utils.data.DataLoader' supports both map-style and iterable-style datasets with single- or multi-process loading, customizing loading order and optional automatic batching (collation) and memory pinning.
loader
Return all trainable parameters of 'm'
trainable_params(m)
trainable_params(m)
m |
a model (module) whose trainable parameters are returned |
None
TrainEvalCallback
TrainEvalCallback(...)
TrainEvalCallback(...)
... |
parameters to pass |
None
Delegates ('__call__','decode','setup') to ('encodes','decodes','setups') if 'split_idx' matches
Transform(enc = NULL, dec = NULL, split_idx = NULL, order = NULL)
Transform(enc = NULL, dec = NULL, split_idx = NULL, order = NULL)
enc |
encoder |
dec |
decoder |
split_idx |
split by index |
order |
order |
None
A basic wrapper that links default transforms for the data block API
TransformBlock( type_tfms = NULL, item_tfms = NULL, batch_tfms = NULL, dl_type = NULL, dls_kwargs = NULL )
TransformBlock( type_tfms = NULL, item_tfms = NULL, batch_tfms = NULL, dl_type = NULL, dls_kwargs = NULL )
type_tfms |
transformation type |
item_tfms |
item transformation type |
batch_tfms |
one or several transforms applied to the batches once they are formed |
dl_type |
the 'DataLoader' type to use |
dls_kwargs |
additional arguments |
block
TransformersDropOutput
TransformersDropOutput()
TransformersDropOutput()
None
TransformersTokenizer
TransformersTokenizer(tokenizer)
TransformersTokenizer(tokenizer)
tokenizer |
tokenizer object |
None
Truncated normal initialization (approximation)
trunc_normal_(x, mean = 0, std = 1)
trunc_normal_(x, mean = 0, std = 1)
x |
tensor |
mean |
mean |
std |
standard deviation |
tensor
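A quick sketch of the in-place fill (the array-to-tensor conversion is an assumption based on 'tensor' above):
## Not run: 
x = tensor(array(0, dim = c(3, 3)))
trunc_normal_(x, mean = 0, std = 0.02)
## End(Not run)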
A TimeSeries Block to process one timeseries
TSBlock(...)
TSBlock(...)
... |
parameters to pass |
None
Create a DataLoader from a df_train and df_valid
TSDataLoaders_from_dfs( df_train, df_valid, path = ".", x_cols = NULL, label_col = NULL, y_block = NULL, item_tfms = NULL, batch_tfms = NULL, bs = 64, val_bs = NULL, shuffle_train = TRUE, device = NULL )
TSDataLoaders_from_dfs( df_train, df_valid, path = ".", x_cols = NULL, label_col = NULL, y_block = NULL, item_tfms = NULL, batch_tfms = NULL, bs = 64, val_bs = NULL, shuffle_train = TRUE, device = NULL )
df_train |
train data |
df_valid |
validation data |
path |
path (optional) |
x_cols |
predictors |
label_col |
label/output column |
y_block |
y_block |
item_tfms |
item transformations |
batch_tfms |
batch transformations |
bs |
batch size |
val_bs |
validation batch size |
shuffle_train |
shuffle train data |
device |
device name |
None
A 'DataFrame' wrapper that knows which cols are x/y, and returns rows in '__getitem__'
TSDataTable( df, procs = NULL, x_names = NULL, y_names = NULL, block_y = NULL, splits = NULL, do_setup = TRUE, device = NULL, inplace = FALSE )
TSDataTable( df, procs = NULL, x_names = NULL, y_names = NULL, block_y = NULL, splits = NULL, do_setup = TRUE, device = NULL, inplace = FALSE )
df |
A DataFrame of your data |
procs |
list of preprocess functions |
x_names |
predictors names |
y_names |
the names of the dependent variables |
block_y |
the TransformBlock to use for the target |
splits |
How to split your data |
do_setup |
A parameter for if Tabular will run the data through the procs upon initialization |
device |
device name |
inplace |
If True, Tabular will not keep a separate copy of your original DataFrame in memory |
None
Basic Time series wrapper
TSeries(...)
TSeries(...)
... |
parameters to pass |
None
TSeries_create
TSeries_create(x, ...)
TSeries_create(x, ...)
x |
tensor |
... |
additional parameters |
tensor
## Not run: 
res = TSeries_create(as.array(runif(100)))
res %>% show(title = 'R array') %>% plot(dpi = 200)
## End(Not run)
Convenience function to easily create a config for 'DynamicUnet'
unet_config( blur = FALSE, blur_final = TRUE, self_attention = FALSE, y_range = NULL, last_cross = TRUE, bottle = FALSE, act_cls = nn()$ReLU, init = nn()$init$kaiming_normal_, norm_type = NULL )
unet_config( blur = FALSE, blur_final = TRUE, self_attention = FALSE, y_range = NULL, last_cross = TRUE, bottle = FALSE, act_cls = nn()$ReLU, init = nn()$init$kaiming_normal_, norm_type = NULL )
blur |
blur is used to avoid checkerboard artifacts at each layer. |
blur_final |
blur final is specific to the last layer. |
self_attention |
self_attention determines if we use a self attention layer at the third block before the end. |
y_range |
If y_range is passed, the last activations go through a sigmoid rescaled to that range. |
last_cross |
last cross |
bottle |
bottle |
act_cls |
activation |
init |
initializer |
norm_type |
normalization type |
None
Build a unet learner from 'dls' and 'arch'
unet_learner(dls, arch, ...)
unet_learner(dls, arch, ...)
dls |
dataloader |
arch |
architecture |
... |
additional arguments |
None
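A minimal sketch, assuming segmentation 'dls' (for example from 'SegmentationDataLoaders_from_label_func' earlier in this manual):
## Not run: 
learn = unet_learner(dls, resnet34())
## End(Not run)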
A quasi-UNet block, using 'PixelShuffle_ICNR upsampling'.
UnetBlock( up_in_c, x_in_c, hook, final_div = TRUE, blur = FALSE, act_cls = nn()$ReLU, self_attention = FALSE, init = nn()$init$kaiming_normal_, norm_type = NULL, ks = 3, stride = 1, padding = NULL, bias = NULL, ndim = 2, bn_1st = TRUE, transpose = FALSE, xtra = NULL, bias_std = 0.01, dilation = 1, groups = 1, padding_mode = "zeros" )
UnetBlock( up_in_c, x_in_c, hook, final_div = TRUE, blur = FALSE, act_cls = nn()$ReLU, self_attention = FALSE, init = nn()$init$kaiming_normal_, norm_type = NULL, ks = 3, stride = 1, padding = NULL, bias = NULL, ndim = 2, bn_1st = TRUE, transpose = FALSE, xtra = NULL, bias_std = 0.01, dilation = 1, groups = 1, padding_mode = "zeros" )
up_in_c |
the number of input channels coming up from the lower-resolution layer |
x_in_c |
the number of channels in the skip-connection input stored by 'hook' |
hook |
The hook is set to this intermediate layer to store the output needed for this block. |
final_div |
final div |
blur |
blur is used to avoid checkerboard artifacts at each layer. |
act_cls |
activation |
self_attention |
self_attention determines if we use a self-attention layer |
init |
initializer |
norm_type |
normalization type |
ks |
kernel size |
stride |
stride |
padding |
padding size |
bias |
bias |
ndim |
number of dimensions |
bn_1st |
whether batch normalization is applied before the activation |
transpose |
whether to use a transposed convolution |
xtra |
xtra |
bias_std |
bias standard deviation |
dilation |
dilation |
groups |
groups |
padding_mode |
The mode of padding |
None
Unfreeze a model
unfreeze(object, ...)
unfreeze(object, ...)
object |
A model |
... |
Additional parameters |
None
## Not run: learnR %>% unfreeze() ## End(Not run)
## Not run: learnR %>% unfreeze() ## End(Not run)
Uniformly apply blurring
uniform_blur2d(x, s)
uniform_blur2d(x, s)
x |
image |
s |
the size of the blur kernel (strength of the effect) |
None
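A small sketch; 'timg' stands for a hypothetical 'TensorImage' batch and the kernel size is illustrative:
## Not run: 
blurred = uniform_blur2d(timg, s = 10)
## End(Not run)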
download ADULT_SAMPLE dataset
URLs_ADULT_SAMPLE(filename = "ADULT_SAMPLE", untar = TRUE)
URLs_ADULT_SAMPLE(filename = "ADULT_SAMPLE", untar = TRUE)
filename |
the name of the file |
untar |
logical, whether to untar the '.tgz' file |
None
## Not run: URLs_ADULT_SAMPLE() ## End(Not run)
## Not run: URLs_ADULT_SAMPLE() ## End(Not run)
download AG_NEWS dataset
URLs_AG_NEWS(filename = "AG_NEWS", untar = TRUE)
URLs_AG_NEWS(filename = "AG_NEWS", untar = TRUE)
filename |
the name of the file |
untar |
logical, whether to untar the '.tgz' file |
None
## Not run: URLs_AG_NEWS() ## End(Not run)
## Not run: URLs_AG_NEWS() ## End(Not run)
download AMAZON_REVIEWS_POLARITY dataset
URLs_AMAZON_REVIEWS_POLARITY( filename = "AMAZON_REVIEWS_POLARITY", untar = TRUE )
URLs_AMAZON_REVIEWS_POLARITY( filename = "AMAZON_REVIEWS_POLARITY", untar = TRUE )
filename |
the name of the file |
untar |
logical, whether to untar the '.tgz' file |
None
download AMAZON_REVIEWS dataset
URLs_AMAZON_REVIEWSAMAZON_REVIEWS( filename = "AMAZON_REVIEWSAMAZON_REVIEWS", untar = TRUE )
URLs_AMAZON_REVIEWSAMAZON_REVIEWS( filename = "AMAZON_REVIEWSAMAZON_REVIEWS", untar = TRUE )
filename |
the name of the file |
untar |
logical, whether to untar the '.tgz' file |
None
download BIWI_HEAD_POSE dataset
URLs_BIWI_HEAD_POSE(filename = "BIWI_HEAD_POSE", untar = TRUE)
URLs_BIWI_HEAD_POSE(filename = "BIWI_HEAD_POSE", untar = TRUE)
filename |
the name of the file |
untar |
logical, whether to untar the '.tgz' file |
None
download CALTECH_101 dataset
URLs_CALTECH_101(filename = "CALTECH_101", untar = TRUE)
URLs_CALTECH_101(filename = "CALTECH_101", untar = TRUE)
filename |
the name of the file |
untar |
logical, whether to untar the '.tgz' file |
None
download CAMVID dataset
URLs_CAMVID(filename = "CAMVID", untar = TRUE)
URLs_CAMVID(filename = "CAMVID", untar = TRUE)
filename |
the name of the file |
untar |
logical, whether to untar the '.tgz' file |
None
download CAMVID_TINY dataset
URLs_CAMVID_TINY(filename = "CAMVID_TINY", untar = TRUE)
URLs_CAMVID_TINY(filename = "CAMVID_TINY", untar = TRUE)
filename |
the name of the file |
untar |
logical, whether to untar the '.tgz' file |
None
download CARS dataset
URLs_CARS(filename = "CARS", untar = TRUE)
URLs_CARS(filename = "CARS", untar = TRUE)
filename |
the name of the file |
untar |
logical, whether to untar the '.tgz' file |
None
download CIFAR dataset
URLs_CIFAR(filename = "CIFAR", untar = TRUE)
URLs_CIFAR(filename = "CIFAR", untar = TRUE)
filename |
the name of the file |
untar |
logical, whether to untar the '.tgz' file |
None
download CIFAR_100 dataset
URLs_CIFAR_100(filename = "CIFAR_100", untar = TRUE)
URLs_CIFAR_100(filename = "CIFAR_100", untar = TRUE)
filename |
the name of the file |
untar |
logical, whether to untar the '.tgz' file |
None
download COCO_TINY dataset
URLs_COCO_TINY(filename = "COCO_TINY", untar = TRUE)
URLs_COCO_TINY(filename = "COCO_TINY", untar = TRUE)
filename |
the name of the file |
untar |
logical, whether to untar the '.tgz' file |
None
download CUB_200_2011 dataset
URLs_CUB_200_2011(filename = "CUB_200_2011", untar = TRUE)
URLs_CUB_200_2011(filename = "CUB_200_2011", untar = TRUE)
filename |
the name of the file |
untar |
logical, whether to untar the '.tgz' file |
None
download DBPEDIA dataset
URLs_DBPEDIA(filename = "DBPEDIA", untar = TRUE)
URLs_DBPEDIA(filename = "DBPEDIA", untar = TRUE)
filename |
the name of the file |
untar |
logical, whether to untar the '.tgz' file |
None
download DOGS dataset
URLs_DOGS(filename = "DOGS", untar = TRUE)
URLs_DOGS(filename = "DOGS", untar = TRUE)
filename |
the name of the file |
untar |
logical, whether to untar the '.tgz' file |
None
download FLOWERS dataset
URLs_FLOWERS(filename = "FLOWERS", untar = TRUE)
URLs_FLOWERS(filename = "FLOWERS", untar = TRUE)
filename |
the name of the file |
untar |
logical, whether to untar the '.tgz' file |
None
download FOOD dataset
URLs_FOOD(filename = "FOOD", untar = TRUE)
URLs_FOOD(filename = "FOOD", untar = TRUE)
filename |
the name of the file |
untar |
logical, whether to untar the '.tgz' file |
None
download HORSE_2_ZEBRA dataset
URLs_HORSE_2_ZEBRA(filename = "horse2zebra", unzip = TRUE)
URLs_HORSE_2_ZEBRA(filename = "horse2zebra", unzip = TRUE)
filename |
the name of the file |
unzip |
logical, whether to unzip the '.zip' file |
None
download HUMAN_NUMBERS dataset
URLs_HUMAN_NUMBERS(filename = "HUMAN_NUMBERS", untar = TRUE)
URLs_HUMAN_NUMBERS(filename = "HUMAN_NUMBERS", untar = TRUE)
filename |
the name of the file |
untar |
logical, whether to untar the '.tgz' file |
None
download IMAGENETTE dataset
URLs_IMAGENETTE(filename = "IMAGENETTE", untar = TRUE)
URLs_IMAGENETTE(filename = "IMAGENETTE", untar = TRUE)
filename |
the name of the file |
untar |
logical, whether to untar the '.tgz' file |
None
download IMAGENETTE_160 dataset
URLs_IMAGENETTE_160(filename = "IMAGENETTE_160", untar = TRUE)
URLs_IMAGENETTE_160(filename = "IMAGENETTE_160", untar = TRUE)
filename |
the name of the file |
untar |
logical, whether to untar the '.tgz' file |
None
download IMAGENETTE_320 dataset
URLs_IMAGENETTE_320(filename = "IMAGENETTE_320", untar = TRUE)
URLs_IMAGENETTE_320(filename = "IMAGENETTE_320", untar = TRUE)
filename |
the name of the file |
untar |
logical, whether to untar the '.tgz' file |
None
download IMAGEWOOF dataset
URLs_IMAGEWOOF(filename = "IMAGEWOOF", untar = TRUE)
URLs_IMAGEWOOF(filename = "IMAGEWOOF", untar = TRUE)
filename |
the name of the file |
untar |
logical, whether to untar the '.tgz' file |
None
download IMAGEWOOF_160 dataset
URLs_IMAGEWOOF_160(filename = "IMAGEWOOF_160", untar = TRUE)
URLs_IMAGEWOOF_160(filename = "IMAGEWOOF_160", untar = TRUE)
filename |
the name of the file |
untar |
logical, whether to untar the '.tgz' file |
None
download IMAGEWOOF_320 dataset
URLs_IMAGEWOOF_320(filename = "IMAGEWOOF_320", untar = TRUE)
URLs_IMAGEWOOF_320(filename = "IMAGEWOOF_320", untar = TRUE)
filename |
the name of the file |
untar |
logical, whether to untar the '.tgz' file |
None
download IMDB dataset
URLs_IMDB(filename = "IMDB", untar = TRUE)
URLs_IMDB(filename = "IMDB", untar = TRUE)
filename |
the name of the file |
untar |
logical, whether to untar the '.tgz' file |
None
download IMDB_SAMPLE dataset
URLs_IMDB_SAMPLE(filename = "IMDB_SAMPLE", untar = TRUE)
URLs_IMDB_SAMPLE(filename = "IMDB_SAMPLE", untar = TRUE)
filename |
the name of the file |
untar |
logical, whether to untar the '.tgz' file |
None
download LSUN_BEDROOMS dataset
URLs_LSUN_BEDROOMS(filename = "LSUN_BEDROOMS", untar = TRUE)
URLs_LSUN_BEDROOMS(filename = "LSUN_BEDROOMS", untar = TRUE)
filename |
the name of the file |
untar |
logical, whether to untar the '.tgz' file |
None
download ML_SAMPLE dataset
URLs_ML_SAMPLE(filename = "ML_SAMPLE", untar = TRUE)
URLs_ML_SAMPLE(filename = "ML_SAMPLE", untar = TRUE)
filename |
the name of the file |
untar |
logical, whether to untar the '.tgz' file |
None
download MNIST dataset
URLs_MNIST(filename = "MNIST", untar = TRUE)
URLs_MNIST(filename = "MNIST", untar = TRUE)
filename |
the name of the file |
untar |
logical, whether to untar the '.tgz' file |
None
download MNIST_SAMPLE dataset
URLs_MNIST_SAMPLE(filename = "MNIST_SAMPLE", untar = TRUE)
URLs_MNIST_SAMPLE(filename = "MNIST_SAMPLE", untar = TRUE)
filename |
the name of the file |
untar |
logical, whether to untar the '.tgz' file |
None
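These 'URLs_*' helpers only fetch and extract the archive into the working directory. A hedged sketch of pointing an image loader at the result, assuming 'ImageDataLoaders_from_folder' from this package and that the archive extracts to 'mnist_sample':
## Not run: 
URLs_MNIST_SAMPLE()
dls = ImageDataLoaders_from_folder(path = 'mnist_sample', bs = 16)
## End(Not run)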
download MNIST_TINY dataset
URLs_MNIST_TINY(filename = "MNIST_TINY", untar = TRUE)
URLs_MNIST_TINY(filename = "MNIST_TINY", untar = TRUE)
filename |
the name of the file |
untar |
logical, whether to untar the '.tgz' file |
None
download MNIST_VAR_SIZE_TINY dataset
URLs_MNIST_VAR_SIZE_TINY(filename = "MNIST_VAR_SIZE_TINY", untar = TRUE)
URLs_MNIST_VAR_SIZE_TINY(filename = "MNIST_VAR_SIZE_TINY", untar = TRUE)
filename |
the name of the file |
untar |
logical, whether to untar the '.tgz' file |
None
download MOVIE_LENS_ML_100k dataset
URLs_MOVIE_LENS_ML_100k(filename = "ml-100k", unzip = TRUE)
URLs_MOVIE_LENS_ML_100k(filename = "ml-100k", unzip = TRUE)
filename |
the name of the file |
unzip |
logical, whether to unzip the '.zip' file |
None
download MT_ENG_FRA dataset
URLs_MT_ENG_FRA(filename = "MT_ENG_FRA", untar = TRUE)
URLs_MT_ENG_FRA(filename = "MT_ENG_FRA", untar = TRUE)
filename |
the name of the file |
untar |
logical, whether to untar the '.tgz' file |
None
download OPENAI_TRANSFORMER dataset
URLs_OPENAI_TRANSFORMER(filename = "OPENAI_TRANSFORMER", untar = TRUE)
URLs_OPENAI_TRANSFORMER(filename = "OPENAI_TRANSFORMER", untar = TRUE)
filename |
the name of the file |
untar |
logical, whether to untar the '.tgz' file |
None
download PASCAL_2007 dataset
URLs_PASCAL_2007(filename = "PASCAL_2007", untar = TRUE)
URLs_PASCAL_2007(filename = "PASCAL_2007", untar = TRUE)
filename |
the name of the file |
untar |
logical, whether to untar the '.tgz' file |
None
download PASCAL_2012 dataset
URLs_PASCAL_2012(filename = "PASCAL_2012", untar = TRUE)
URLs_PASCAL_2012(filename = "PASCAL_2012", untar = TRUE)
filename |
the name of the file |
untar |
logical, whether to untar the '.tgz' file |
None
download PETS dataset
URLs_PETS(filename = "PETS", untar = TRUE)
URLs_PETS(filename = "PETS", untar = TRUE)
filename |
the name of the file |
untar |
logical, whether to untar the '.tgz' file |
None
download PLANET_SAMPLE dataset
URLs_PLANET_SAMPLE(filename = "PLANET_SAMPLE", untar = TRUE)
URLs_PLANET_SAMPLE(filename = "PLANET_SAMPLE", untar = TRUE)
filename |
the name of the file |
untar |
logical, whether to untar the '.tgz' file |
None
download PLANET_TINY dataset
URLs_PLANET_TINY(filename = "PLANET_TINY", untar = TRUE)
URLs_PLANET_TINY(filename = "PLANET_TINY", untar = TRUE)
filename |
the name of the file |
untar |
logical, whether to untar the '.tgz' file |
None
download S3_COCO dataset
URLs_S3_COCO(filename = "S3_COCO", untar = TRUE)
URLs_S3_COCO(filename = "S3_COCO", untar = TRUE)
filename |
the name of the file |
untar |
logical, whether to untar the '.tgz' file |
None
download S3_IMAGE dataset
URLs_S3_IMAGE(filename = "S3_IMAGE", untar = TRUE)
URLs_S3_IMAGE(filename = "S3_IMAGE", untar = TRUE)
filename |
the name of the file |
untar |
logical, whether to untar the '.tgz' file |
None
download S3_IMAGELOC dataset
URLs_S3_IMAGELOC(filename = "S3_IMAGELOC", untar = TRUE)
URLs_S3_IMAGELOC(filename = "S3_IMAGELOC", untar = TRUE)
filename |
the name of the file |
untar |
logical, whether to untar the '.tgz' file |
None
download S3_MODEL dataset
URLs_S3_MODEL(filename = "S3_MODEL", untar = TRUE)
URLs_S3_MODEL(filename = "S3_MODEL", untar = TRUE)
filename |
the name of the file |
untar |
logical, whether to untar the '.tgz' file |
None
download S3_NLP dataset
URLs_S3_NLP(filename = "S3_NLP", untar = TRUE)
URLs_S3_NLP(filename = "S3_NLP", untar = TRUE)
filename |
the name of the file |
untar |
logical, whether to untar the '.tgz' file |
None
download SIIM_SMALL dataset
URLs_SIIM_SMALL(filename = "SIIM_SMALL", untar = TRUE)
URLs_SIIM_SMALL(filename = "SIIM_SMALL", untar = TRUE)
filename |
the name of the file |
untar |
logical, whether to untar the '.tgz' file |
None
download SKIN_LESION dataset
URLs_SKIN_LESION(filename = "SKIN_LESION", untar = TRUE)
URLs_SKIN_LESION(filename = "SKIN_LESION", untar = TRUE)
filename |
the name of the file |
untar |
logical, whether to untar the '.tgz' file |
None
download SOGOU_NEWS dataset
URLs_SOGOU_NEWS(filename = "SOGOU_NEWS", untar = TRUE)
URLs_SOGOU_NEWS(filename = "SOGOU_NEWS", untar = TRUE)
filename |
the name of the file |
untar |
logical, whether to untar the '.tgz' file |
None
download SPEAKERS10 dataset
URLs_SPEAKERS10(filename = "SPEAKERS10", untar = TRUE)
URLs_SPEAKERS10(filename = "SPEAKERS10", untar = TRUE)
filename |
the name of the file |
untar |
logical, whether to untar the '.tgz' file |
None
## Not run: URLs_SPEAKERS10() ## End(Not run)
## Not run: URLs_SPEAKERS10() ## End(Not run)
download SPEECHCOMMANDS dataset
URLs_SPEECHCOMMANDS(filename = "SPEECHCOMMANDS", untar = TRUE)
URLs_SPEECHCOMMANDS(filename = "SPEECHCOMMANDS", untar = TRUE)
filename |
the name of the file |
untar |
logical, whether to untar the '.tgz' file |
None
## Not run: URLs_SPEECHCOMMANDS() ## End(Not run)
## Not run: URLs_SPEECHCOMMANDS() ## End(Not run)
download WIKITEXT dataset
URLs_WIKITEXT(filename = "WIKITEXT", untar = TRUE)
URLs_WIKITEXT(filename = "WIKITEXT", untar = TRUE)
filename |
the name of the file |
untar |
logical, whether to untar the '.tgz' file |
None
download WIKITEXT_TINY dataset
URLs_WIKITEXT_TINY(filename = "WIKITEXT_TINY", untar = TRUE)
URLs_WIKITEXT_TINY(filename = "WIKITEXT_TINY", untar = TRUE)
filename |
the name of the file |
untar |
logical, whether to untar the '.tgz' file |
None
download WT103_BWD dataset
URLs_WT103_BWD(filename = "WT103_BWD", untar = TRUE)
URLs_WT103_BWD(filename = "WT103_BWD", untar = TRUE)
filename |
the name of the file |
untar |
logical, whether to untar the '.tgz' file |
None
download WT103_FWD dataset
URLs_WT103_FWD(filename = "WT103_FWD", untar = TRUE)
URLs_WT103_FWD(filename = "WT103_FWD", untar = TRUE)
filename |
the name of the file |
untar |
logical, whether to untar the '.tgz' file |
None
download YAHOO_ANSWERS dataset
URLs_YAHOO_ANSWERS(filename = "YAHOO_ANSWERS", untar = TRUE)
URLs_YAHOO_ANSWERS(filename = "YAHOO_ANSWERS", untar = TRUE)
filename |
the name of the file |
untar |
logical, whether to untar the '.tgz' file |
None
download YELP_REVIEWS dataset
URLs_YELP_REVIEWS(filename = "YELP_REVIEWS", untar = TRUE)
URLs_YELP_REVIEWS(filename = "YELP_REVIEWS", untar = TRUE)
filename |
the name of the file |
untar |
logical, whether to untar the '.tgz' file |
None
download YELP_REVIEWS_POLARITY dataset
URLs_YELP_REVIEWS_POLARITY(filename = "YELP_REVIEWS_POLARITY", untar = TRUE)
URLs_YELP_REVIEWS_POLARITY(filename = "YELP_REVIEWS_POLARITY", untar = TRUE)
filename |
the name of the file |
untar |
logical, whether to untar the '.tgz' file |
None
VGG 11-layer model (configuration "A") with batch normalization
vgg11_bn(pretrained = FALSE, progress)
vgg11_bn(pretrained = FALSE, progress)
pretrained |
pretrained or not |
progress |
to see progress bar or not |
"Very Deep Convolutional Networks For Large-Scale Image Recognition" <https://arxiv.org/pdf/1409.1556.pdf>
model
VGG 13-layer model (configuration "B") with batch normalization
vgg13_bn(pretrained = FALSE, progress)
vgg13_bn(pretrained = FALSE, progress)
pretrained |
pretrained or not |
progress |
to see progress bar or not |
"Very Deep Convolutional Networks For Large-Scale Image Recognition" <https://arxiv.org/pdf/1409.1556.pdf>
model
VGG 16-layer model (configuration "D") with batch normalization
vgg16_bn(pretrained = FALSE, progress)
vgg16_bn(pretrained = FALSE, progress)
pretrained |
pretrained or not |
progress |
to see progress bar or not |
"Very Deep Convolutional Networks For Large-Scale Image Recognition" <https://arxiv.org/pdf/1409.1556.pdf>
model
VGG 19-layer model (configuration 'E') with batch normalization
vgg19_bn(pretrained = FALSE, progress)
vgg19_bn(pretrained = FALSE, progress)
pretrained |
pretrained or not |
progress |
to see progress bar or not |
"Very Deep Convolutional Networks For Large-Scale Image Recognition" <https://arxiv.org/pdf/1409.1556.pdf>
model
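All four VGG loaders share the same signature; a minimal sketch of instantiating one (note that 'progress' has no default here, so it is passed explicitly):
## Not run: 
model = vgg16_bn(pretrained = TRUE, progress = TRUE)
## End(Not run)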
'F$leaky_relu' with a negative slope of 0.3
vleaky_relu(input, inplace = TRUE)
vleaky_relu(input, inplace = TRUE)
input |
inputs |
inplace |
inplace or not |
None
Voice
Voice( sample_rate = 16000, n_fft = 1024, win_length = NULL, hop_length = 128, f_min = 50, f_max = 8000, pad = 0, n_mels = 128, window_fn = torch()$hann_window, power = 2, normalized = FALSE, wkwargs = NULL, mel = TRUE, to_db = TRUE )
Voice( sample_rate = 16000, n_fft = 1024, win_length = NULL, hop_length = 128, f_min = 50, f_max = 8000, pad = 0, n_mels = 128, window_fn = torch()$hann_window, power = 2, normalized = FALSE, wkwargs = NULL, mel = TRUE, to_db = TRUE )
sample_rate |
sample rate |
n_fft |
the size of the fast Fourier transform (FFT) |
win_length |
windowing length |
hop_length |
hopping length |
f_min |
minimum frequency |
f_max |
maximum frequency |
pad |
padding mode |
n_mels |
number of mel filterbanks |
window_fn |
window function |
power |
power |
normalized |
normalized or not |
wkwargs |
additional arguments |
mel |
mel-spectrogram or not |
to_db |
to decibels |
None
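A sketch of a custom audio configuration built from the documented parameters; the values shown are illustrative, not recommendations:
## Not run: 
cfg = Voice(
  sample_rate = 16000,
  n_fft = 1024,
  n_mels = 64,
  f_min = 50,
  f_max = 8000
)
## End(Not run)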
Saves model topology, losses & metrics
WandbCallback( log = "gradients", log_preds = TRUE, log_model = TRUE, log_dataset = FALSE, dataset_name = NULL, valid_dl = NULL, n_preds = 36, seed = 12345, reorder = TRUE )
WandbCallback( log = "gradients", log_preds = TRUE, log_model = TRUE, log_dataset = FALSE, dataset_name = NULL, valid_dl = NULL, n_preds = 36, seed = 12345, reorder = TRUE )
log |
"gradients" (default), "parameters", "all" or None. Losses & metrics are always logged. |
log_preds |
whether to log prediction samples (defaults to TRUE). |
log_model |
whether to log the model (defaults to TRUE); this also requires SaveModelCallback. |
log_dataset |
FALSE (default); TRUE logs the folder referenced by learn.dls.path; a path can also be given explicitly to reference which folder to log. Note: the subfolder "models" is always ignored. |
dataset_name |
name of the logged dataset (defaults to the folder name). |
valid_dl |
DataLoaders containing items used for prediction samples (defaults to random items from learn.dls.valid). |
n_preds |
number of logged predictions (defaults to 36). |
seed |
seed used for defining random samples. |
reorder |
reorder or not |
None
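A hedged sketch of attaching the callback during training, assuming the Python 'wandb' package is installed and that this package's fitting functions accept callbacks via 'cbs':
## Not run: 
learn %>% fit_one_cycle(1, cbs = WandbCallback(log_preds = FALSE))
## End(Not run)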
Apply perspective warping with 'magnitude' and 'p' on a batch of matrices
Warp( magnitude = 0.2, p = 0.5, draw_x = NULL, draw_y = NULL, size = NULL, mode = "bilinear", pad_mode = "reflection", batch = FALSE, align_corners = TRUE )
Warp( magnitude = 0.2, p = 0.5, draw_x = NULL, draw_y = NULL, size = NULL, mode = "bilinear", pad_mode = "reflection", batch = FALSE, align_corners = TRUE )
magnitude |
magnitude |
p |
probability |
draw_x |
draw x |
draw_y |
draw y |
size |
size |
mode |
mode |
pad_mode |
padding mode |
batch |
batch |
align_corners |
align corners |
None
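A minimal sketch of including the transform in a batch-transform list; how the list is consumed depends on the dataloader constructor used:
## Not run: 
batch_tfms = list(Warp(magnitude = 0.2, p = 0.5))
## End(Not run)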
Plots an explanation of a single prediction as a waterfall plot. Accepts a 'row_idx' and 'class_id'.
waterfall_plot(object, row_idx = NULL, class_id = 0, dpi = 200, ...)
waterfall_plot(object, row_idx = NULL, class_id = 0, dpi = 200, ...)
object |
ShapInterpretation object |
row_idx |
the index of the row in test_data to be analyzed (defaults to zero). |
class_id |
the class of interest for a classification model; either an int or the str representation of a class. |
dpi |
dots per inch |
... |
additional arguments |
None
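A small sketch; 'interp' stands for a hypothetical ShapInterpretation object created beforehand:
## Not run: 
interp %>% waterfall_plot(row_idx = 1, class_id = 0)
## End(Not run)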
Weight decay as decaying 'p' with 'lr*wd'
weight_decay(p, lr, wd, do_wd = TRUE, ...)
weight_decay(p, lr, wd, do_wd = TRUE, ...)
p |
the parameter tensor to decay |
lr |
learning rate |
wd |
weight decay |
do_wd |
whether to apply weight decay |
... |
additional args to pass |
None
## Not run: 
tst_param = function(val, grad = NULL) {
  # Create a tensor with `val` and a gradient of `grad` for testing
  res = tensor(val) %>% float()
  if (is.null(grad)) {
    grad = tensor(val / 10)
  } else {
    grad = tensor(grad)
  }
  res$grad = grad %>% float()
  res
}
p = tst_param(1., 0.1)
weight_decay(p, 1., 0.1)
## End(Not run)
A module that wraps another layer in which some weights will be replaced by 0 during training.
WeightDropout(module, weight_p, layer_names = "weight_hh_l0")
WeightDropout(module, weight_p, layer_names = "weight_hh_l0")
module |
module |
weight_p |
the probability with which the wrapped weights are zeroed |
layer_names |
the names of the weight parameters to apply dropout to |
None
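A minimal sketch of wrapping an LSTM so that its hidden-to-hidden weights are dropped during training; 'nn()' is the torch module accessor used elsewhere in this document:
## Not run: 
module = nn()$LSTM(5L, 7L)
dp_module = WeightDropout(module, weight_p = 0.5)
## End(Not run)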
A 'DataLoader' that samples items according to 'wgts'
WeightedDL( dataset = NULL, bs = NULL, wgts = NULL, shuffle = FALSE, num_workers = NULL, verbose = FALSE, do_setup = TRUE, pin_memory = FALSE, timeout = 0, batch_size = NULL, drop_last = FALSE, indexed = NULL, n = NULL, device = NULL, persistent_workers = FALSE )
WeightedDL( dataset = NULL, bs = NULL, wgts = NULL, shuffle = FALSE, num_workers = NULL, verbose = FALSE, do_setup = TRUE, pin_memory = FALSE, timeout = 0, batch_size = NULL, drop_last = FALSE, indexed = NULL, n = NULL, device = NULL, persistent_workers = FALSE )
dataset |
dataset |
bs |
bs |
wgts |
weights |
shuffle |
shuffle |
num_workers |
number of workers |
verbose |
verbose |
do_setup |
do_setup |
pin_memory |
pin_memory |
timeout |
timeout |
batch_size |
batch_size |
drop_last |
drop_last |
indexed |
indexed |
n |
n |
device |
device |
persistent_workers |
persistent_workers |
None
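A hedged sketch: items with larger weights are sampled more often; 'ds' and the weight vector are illustrative placeholders:
## Not run: 
wgts = runif(100)  # one weight per item in 'ds'
dl = WeightedDL(dataset = ds, bs = 16, wgts = wgts, shuffle = TRUE)
## End(Not run)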
A sequential container.
XResNet(block, expansion, layers, c_in = 3, c_out = 1000, ...)
XResNet(block, expansion, layers, c_in = 3, c_out = 1000, ...)
block |
the blocks to pass to XResNet |
expansion |
argument for inputs and filters |
layers |
the layers to pass to XResNet |
c_in |
number of inputs |
c_out |
number of outputs |
... |
additional arguments |
Load model architecture
xresnet101(...)
xresnet101(...)
... |
parameters to pass |
model
Load model architecture
xresnet152(...)
xresnet152(...)
... |
parameters to pass |
model
Load model architecture
xresnet18(...)
xresnet18(...)
... |
parameters to pass |
model
Load model architecture
xresnet18_deep(...)
xresnet18_deep(...)
... |
parameters to pass |
model
Load model architecture
xresnet18_deeper(...)
xresnet18_deeper(...)
... |
parameters to pass |
model
Load model architecture
xresnet34(...)
xresnet34(...)
... |
parameters to pass |
model
Load model architecture
xresnet34_deep(...)
xresnet34_deep(...)
... |
parameters to pass |
model
Load model architecture
xresnet34_deeper(...)
xresnet34_deeper(...)
... |
parameters to pass |
model
Load model architecture
xresnet50(...)
xresnet50(...)
... |
parameters to pass |
model
Load model architecture
xresnet50_deep(...)
xresnet50_deep(...)
... |
parameters to pass |
model
Load model architecture
xresnet50_deeper(...)
xresnet50_deeper(...)
... |
parameters to pass |
model
Load model architecture
xresnext101(...)
xresnext101(...)
... |
parameters to pass |
model
Load model architecture
xresnext18(...)
xresnext18(...)
... |
parameters to pass |
model
Load model architecture
xresnext34(...)
xresnext34(...)
... |
parameters to pass |
model
Load model architecture
xresnext50(...)
xresnext50(...)
... |
parameters to pass |
model
Load model architecture
xse_resnet101(...)
xse_resnet101(...)
... |
parameters to pass |
model
Load model architecture
xse_resnet152(...)
xse_resnet152(...)
... |
parameters to pass |
model
Load model architecture
xse_resnet18(...)
xse_resnet18(...)
... |
parameters to pass |
model
Load model architecture
xse_resnet34(...)
xse_resnet34(...)
... |
parameters to pass |
model
Load model architecture
xse_resnet50(...)
xse_resnet50(...)
... |
parameters to pass |
model
Load model architecture
xse_resnext101(...)
xse_resnext101(...)
... |
parameters to pass |
model
Load model architecture
xse_resnext18(...)
xse_resnext18(...)
... |
parameters to pass |
model
Load model architecture
xse_resnext18_deep(...)
xse_resnext18_deep(...)
... |
parameters to pass |
model
Load model architecture
xse_resnext18_deeper(...)
xse_resnext18_deeper(...)
... |
parameters to pass |
model
Load model architecture
xse_resnext34(...)
xse_resnext34(...)
... |
parameters to pass |
model
Load model architecture
xse_resnext34_deep(...)
xse_resnext34_deep(...)
... |
parameters to pass |
model
Load model architecture
xse_resnext34_deeper(...)
xse_resnext34_deeper(...)
... |
parameters to pass |
model
Load model architecture
xse_resnext50(...)
xse_resnext50(...)
... |
parameters to pass |
model
Load model architecture
xse_resnext50_deep(...)
xse_resnext50_deep(...)
... |
parameters to pass |
model
Load model architecture
xse_resnext50_deeper(...)
xse_resnext50_deeper(...)
... |
parameters to pass |
model
Load model architecture
xsenet154(...)
xsenet154(...)
... |
parameters to pass |
model
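All of these architecture loaders share one calling convention; a minimal sketch (passing 'pretrained' assumes the underlying fastai constructor accepts it via '...'):
## Not run: 
model = xresnet50(pretrained = FALSE)
## End(Not run)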
Zoom
zoom(img, ratio)
zoom(img, ratio)
img |
image files |
ratio |
zoom ratio |
image
Apply a random zoom of at most 'max_zoom' with probability 'p' to a batch of images
Zoom_( min_zoom = 1, max_zoom = 1.1, p = 0.5, draw = NULL, draw_x = NULL, draw_y = NULL, size = NULL, mode = "bilinear", pad_mode = "reflection", batch = FALSE, align_corners = TRUE )
Zoom_( min_zoom = 1, max_zoom = 1.1, p = 0.5, draw = NULL, draw_x = NULL, draw_y = NULL, size = NULL, mode = "bilinear", pad_mode = "reflection", batch = FALSE, align_corners = TRUE )
min_zoom |
minimum zoom |
max_zoom |
maximum zoom |
p |
probability |
draw |
draw |
draw_x |
draw x |
draw_y |
draw y |
size |
size |
mode |
mode |
pad_mode |
pad mode |
batch |
batch |
align_corners |
align corners or not |
None
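A minimal sketch of adding the transform to a batch-transform list; the values shown are illustrative:
## Not run: 
batch_tfms = list(Zoom_(min_zoom = 1, max_zoom = 1.3, p = 0.5))
## End(Not run)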
Return a random zoom matrix with 'max_zoom' and 'p'
zoom_mat( x, min_zoom = 1, max_zoom = 1.1, p = 0.5, draw = NULL, draw_x = NULL, draw_y = NULL, batch = FALSE )
zoom_mat( x, min_zoom = 1, max_zoom = 1.1, p = 0.5, draw = NULL, draw_x = NULL, draw_y = NULL, batch = FALSE )
x |
tensor |
min_zoom |
minimum zoom |
max_zoom |
maximum zoom |
p |
probability |
draw |
draw |
draw_x |
draw x |
draw_y |
draw y |
batch |
batch |
None