
Fig 1.

One of the major differences between DAZZLE and DeepSEM is the use of Dropout Augmentation.

Dropout augmentation regularizes model training by simulating small amounts of random dropout at each training iteration such that the model is protected against the negative impact of dropout noise. Rounded boxes indicate trainable model parameters.
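The corruption step described above can be sketched in a few lines. This is a minimal NumPy sketch of the idea only, not the authors' implementation; the function name `augment_dropout` and the default rate are illustrative (the 10% figure comes from the recommended level in Fig 2):

```python
import numpy as np

def augment_dropout(x, p=0.1, rng=None):
    """Simulate extra dropout noise on an expression matrix.

    At each training iteration, each entry of `x` is independently
    zeroed with probability `p`, so the model learns to tolerate a
    small amount of additional dropout noise.
    """
    rng = np.random.default_rng(rng)
    mask = rng.random(x.shape) >= p  # keep each entry with prob 1 - p
    return x * mask
```

In training, this corruption would be re-sampled at every iteration, so the model never sees the same noise pattern twice.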


Table 1.

DAZZLE shows improved GRN inference capacity on BEELINE benchmarks.


Fig 2.

An appropriate amount of augmented dropout helps maintain model robustness and may contribute to better performance.

Color reflects the probability of dropout augmentation. The two thick lines mark two important conditions: 0% (no augmented dropout) and 10% (the default dropout augmentation level we recommend). Dashed lines show the default number of training iterations used in DeepSEM and DAZZLE.


Fig 3.

Comparison of 100 runs of DeepSEM-1x and DAZZLE-1x on hESC evaluated using the STRING network.

In DeepSEM-1x, prioritization of the L1 sparsity control at an early stage is the main cause of unstable GRN inference performance. DAZZLE resolves this issue by delaying the introduction of the L1 sparse loss by 5 iterations. a) Histogram of AUPRC ratios. b) Sparse loss over time for DeepSEM-1x, colored by AUPRC ratio at convergence. c) Sparse loss over time for DAZZLE-1x, colored by AUPRC ratio at convergence.
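The delayed-sparsity schedule described in this caption can be sketched as follows. The function and the coefficient name `alpha` are hypothetical illustrations; only the 5-iteration delay comes from the caption:

```python
def sparse_loss_weight(iteration, alpha=1.0, delay=5):
    """Coefficient applied to the L1 sparse loss at a given iteration.

    The penalty is switched off for the first `delay` iterations and
    takes its full value `alpha` afterwards, so reconstruction terms
    dominate early training before sparsity is enforced.
    """
    return 0.0 if iteration < delay else alpha
```

During training, the total loss at each step would then include `sparse_loss_weight(step) * l1_penalty`, keeping the early iterations free of the sparsity pressure that destabilizes DeepSEM-1x.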


Fig 4.

a. Top 10 regulated genes at each life stage in mouse microglia.

b. Predicted local networks around Tmem119 and Apoe in mouse microglia. Here, edge weights are min-max scaled at each time point. Top genes are selected according to the maximum weight across all time points.
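The per-time-point min-max scaling mentioned in the caption can be sketched generically as below; this is a standard rescaling, not the authors' code, and would be applied independently to each time point's vector of edge weights:

```python
import numpy as np

def minmax_scale_edges(weights):
    """Min-max scale a vector of edge weights into [0, 1].

    Returns zeros when all weights are equal, to avoid dividing
    by zero in the degenerate case.
    """
    w = np.asarray(weights, dtype=float)
    lo, hi = w.min(), w.max()
    if hi == lo:
        return np.zeros_like(w)
    return (w - lo) / (hi - lo)
```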
