Supported segmentation mask types:
- Class labels: pixel annotations of classes (e.g., 0 for background and 1...n for positive classes)
- Instance labels: pixel annotations assigning each pixel to an instance (e.g., 0 for background, 1 for the first ROI, 2 for the second ROI, etc.)
show(mask, inst_labels)
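As a toy illustration of the difference (arrays made up for this example, not part of the library): a class-label mask only distinguishes classes, while an instance-label mask gives every object its own id.

import numpy as np
# toy 4x4 example with two touching objects of the same class
toy_instlabels = np.array([[0, 1, 1, 0],
                           [0, 1, 2, 0],
                           [0, 2, 2, 0],
                           [0, 0, 0, 0]])
toy_clabels = (toy_instlabels > 0).astype(np.uint8)   # class labels: 0 = background, 1 = foreground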
The provided segmentation masks are preprocessed to
- convert instance labels to class labels
- draw small ridges between touching instances (optional)
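A rough sketch of what these two steps mean, in plain NumPy/SciPy (illustrative only, not the library's implementation, which also supports multi-class masks via n_dims):

import numpy as np
from scipy import ndimage

def preprocess_sketch(instlabels):
    clabels = (instlabels > 0).astype(np.uint8)                   # instance ids -> binary class labels
    sentinel = instlabels.max() + 1
    mx = ndimage.maximum_filter(instlabels, size=3)               # largest id in each 3x3 neighborhood
    mn = ndimage.minimum_filter(np.where(instlabels > 0, instlabels, sentinel), size=3)
    ridges = (instlabels > 0) & (mx > mn)                         # two different instance ids meet here
    clabels[ridges] = 0                                           # draw a small background ridge
    return clabels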
Arguments in preprocess_mask:
- clabels: class labels (segmentation mask)
- instlabels: instance labels (segmentation mask)
- n_dims (int): number of classes for clabels
tst1 = preprocess_mask(mask, remove_overlap=False)
tst2 = preprocess_mask(inst_labels, instlabels=True)
show(tst1,tst2)
ind = (slice(200,230), slice(230,260))
print('Zoom in on borders:')
show(tst1[ind], tst2[ind])
Effective sampling: Probability density function (PDF)
Arguments in create_pdf:
- labels: preprocessed class labels (segmentation mask)
- ignore: ignored regions
- fbr (float): foreground_background_ratio to define the sampling PDF
- scale (int): limit the size of the PDF
pdf = create_pdf(tst2, scale=None)
test_eq(pdf.shape[0],tst2.shape[0]*tst2.shape[1])
scale = 512
pdf = create_pdf(tst2, scale=scale)
test_close(pdf.max(),1)
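The tests above suggest the shape of the output: a flattened, cumulative distribution over all pixels (length H*W when scale=None) whose last value is 1. A rough sketch of how such a distribution could be built, assuming foreground pixels are weighted higher than background according to fbr (illustrative only; ignore and scale handling are omitted):

import numpy as np

def create_pdf_sketch(labels, fbr=0.1):                     # fbr value is an arbitrary example
    weights = np.where(labels > 0, 1.0, fbr).ravel()        # foreground pixels get weight 1, background fbr
    return np.cumsum(weights / weights.sum())               # cumulative distribution, last entry == 1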
Random center
centers = [random_center(pdf, mask.shape) for _ in range(int(5e+2))]  # draw 500 sampling centers from the PDF
plt.imshow(mask)
xs = [x[1] for x in centers]
ys = [x[0] for x in centers]
plt.scatter(x=xs, y=ys, c='r', s=10)
plt.show()
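Under the same cumulative-distribution assumption, drawing a random center amounts to inverse-transform sampling: draw a uniform number, locate it in the cumulative distribution, and unravel the flat index back to (row, col). A minimal sketch (the library version presumably also maps a scale-reduced PDF back to the full mask shape):

import numpy as np

def random_center_sketch(cdf, shape):
    idx = np.searchsorted(cdf, np.random.rand())   # inverse-transform sampling on the flat CDF
    return np.unravel_index(idx, shape)            # flat index -> (row, col) center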
We calculate the weights for the weighted softmax cross-entropy loss from the given mask (class labels).
!! Attention: calculate_weights is no longer used for training !! See the real-time weight calculation below.
Arguments in calculate_weights:
- clabels: class labels (segmentation mask)
- instlabels: instance labels (segmentation mask)
- ignore: ignored regions
- n_dims (int): number of classes for clabels
- bws (float): border_weight_sigma in pixels
- fds (float): foreground_dist_sigma in pixels
- bwf (float): border_weight_factor
- fbr (float): foreground_background_ratio
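These parameters mirror the weighting scheme of the original U-Net papers: fbr balances foreground against background, fds controls how fast foreground weights fall off with distance, and pixels in narrow gaps between touching objects receive an extra border weight. A rough sketch of that border term (intuition only, not the exact deepflash2 formula; parameter values are arbitrary examples):

import numpy as np

def border_weight_sketch(d1, d2, bwf=50, bws=6):
    # d1, d2: per-pixel distances to the nearest and second-nearest object
    return bwf * np.exp(-(d1 + d2) ** 2 / (2 * bws ** 2))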
labels, weights, _ = calculate_weights(clabels=mask)
titles = ['Labels (Mask)', 'Weights', 'PDF', ]
show(labels, weights)
Plot different weight parameters (foreground_dist_sigma_px, border_weight_factor)
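One possible way to produce such a comparison, assuming calculate_weights accepts the fds and bwf keywords listed above (parameter values are arbitrary examples):

for bwf in (1, 10, 50):                                   # vary the border weight factor
    _, w, _ = calculate_weights(clabels=mask, bwf=bwf)
    show(w)
for fds in (5, 10, 20):                                   # vary the foreground distance sigma
    _, w, _ = calculate_weights(clabels=mask, fds=fds)
    show(w)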
To efficiently calculate the mask weights for training, we leverage the LogConv approach for a fast convolutional distance transform, based on this paper: Karam, Christina, Kenjiro Sugimoto, and Keigo Hirakawa. "Fast convolutional distance transform." IEEE Signal Processing Letters 26.6 (2019): 853-857.
Our implementation in PyTorch leverages
- Separable convolutions
- GPU acceleration
We use lambda = 0.35 and a kernel size of 73.
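To see why a distance transform can be phrased as a convolution: the hard minimum over distances is relaxed to a log-sum-exp (soft minimum), and for the city-block distance the kernel exp(-lambda*(|dx|+|dy|)) factorizes into two 1D passes. A minimal sketch of that idea (illustrative only; the library's lambda_kernel and SeparableConv2D may differ in kernel shape and normalization):

import torch
import torch.nn.functional as F

lam, ksize = 0.35, 73
r = (torch.arange(ksize, dtype=torch.float32) - ksize // 2).abs()
k = torch.exp(-lam * r)                                    # 1D kernel, center value is 1

def logconv_l1_distance(fg):
    # soft-min relaxation: min(|dx|+|dy|) ~ -1/lam * log(sum exp(-lam*(|dx|+|dy|)))
    x = fg.float().view(1, 1, *fg.shape)
    x = F.conv2d(x, k.view(1, 1, 1, -1), padding=(0, ksize // 2))   # convolve rows
    x = F.conv2d(x, k.view(1, 1, -1, 1), padding=(ksize // 2, 0))   # convolve columns
    return (-torch.log(x.clamp_min(1e-20)) / lam)[0, 0]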
test_eq(lambda_kernel(3, 0.35)[1],1)
# one-hot encode the instance labels and drop the background channel -> [H, W, ROIs]
inp1 = torch.eye(3)[inst_labels][...,1:]
# channels first -> [ROIs, H, W]
inp1 = inp1.permute(2,0,1)
tst = SeparableConv2D(0.35, channels=inp1.size(0))
out = tst(inp1)
show(out[0], out[1])
Single-item version for CPU, input shape [ROIs, H, W]
tst = WeightTransformSingle(channels=inp1.size(0))
out = tst(inp1)
show(mask>0, mask, out)
Batch version for GPU, transforming instance labels with shape [batch, H, W]
inp2 = torch.Tensor(inst_labels)#.cuda()
inp2 = inp2.view(1, *inp2.shape)
tst = WeightTransform(channels=inp2.size(-1))
out = tst(inp2)
show(mask>0, mask, out[0])
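For the real-time weight calculation mentioned above, such a transform would run inside the training loop, and its output would feed a weighted loss. A hypothetical sketch of a weighted softmax cross-entropy that consumes a per-pixel weight map of shape [batch, H, W] (weighted_ce_loss is illustrative, not a library function):

import torch
import torch.nn.functional as F

def weighted_ce_loss(logits, targets, weights):
    # logits: [batch, classes, H, W], targets: [batch, H, W], weights: [batch, H, W]
    pixel_loss = F.cross_entropy(logits, targets, reduction='none')   # per-pixel loss [batch, H, W]
    return (pixel_loss * weights).mean()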