
Extended Data Fig. 2: PEGSNet architecture and training strategy.

From: Instantaneous tracking of earthquake growth with elastogravity signals

Extended Data Fig. 2

a, The input data for one example consists of a three-channel image of shape M × N, where M is the number of time samples and N is the number of seismic stations. Only the vertical component of the input data is displayed for simplicity. Each convolutional block is composed of a convolutional layer (yellow) with a ReLU activation (orange) and a spatial dropout layer (light blue). Max pooling layers (red) reduce each dimension of the input data by a factor of two. The number of filters used in each convolutional layer is indicated for clarity. The last convolutional block is connected to dense layers (purple) with ReLU activation functions, followed by dropout (light blue). The output layer uses a tanh activation function to predict values of Mw, latitude (φ) and longitude (λ). b, The Huber loss is plotted as a function of epochs for the training (Train.) and validation (Val.) sets. Each epoch corresponds to a full pass over the training set in batches of size 512. The red star indicates the epoch with the minimum loss on the validation set; the corresponding model is used for predictions on the test set and on real data. c, Data from one example in the training database (vertical component). The grey shaded area corresponds to the input data for PEGSNet shown in a. T1 and T2 are the beginning and end of the selected input window. During training, T1 is selected at random and T2 = T1 + 315 s. d, Moment rate (blue) and Mw(t) (dark grey) for the selected event. Given the randomly selected value of T1 for this example, the corresponding label is Mw(T2), that is, the magnitude at the end of the selected window. During training, this label is compared with the Mw predicted by PEGSNet in a.
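A minimal sketch of the architecture described in a is given below, inferred from the caption alone. The filter counts, kernel size, dropout rates and dense-layer width are placeholders (the real values appear only in the figure), and tf.keras is an assumed framework choice, not necessarily the authors' implementation; only the overall layout (convolution + ReLU + spatial dropout + max pooling blocks, dense layers with dropout, and a tanh output for Mw, φ and λ) follows the text.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def conv_block(x, filters, dropout_rate=0.1):
    # Convolutional layer (yellow) with ReLU activation (orange),
    # spatial dropout (light blue), then 2x2 max pooling (red),
    # which reduces each dimension of the input by a factor of two.
    x = layers.Conv2D(filters, kernel_size=3, padding="same",
                      activation="relu")(x)
    x = layers.SpatialDropout2D(dropout_rate)(x)
    x = layers.MaxPooling2D(pool_size=2)(x)
    return x

def build_pegsnet(m_samples, n_stations, n_channels=3):
    # Input: a three-channel image of shape M x N
    # (M time samples x N seismic stations).
    inputs = layers.Input(shape=(m_samples, n_stations, n_channels))
    x = inputs
    for filters in (32, 64, 128):  # placeholder filter counts
        x = conv_block(x, filters)
    x = layers.Flatten()(x)
    x = layers.Dense(128, activation="relu")(x)  # dense layer (purple), ReLU
    x = layers.Dropout(0.2)(x)                   # dropout (light blue)
    # tanh output predicting scaled values of Mw, latitude and longitude.
    outputs = layers.Dense(3, activation="tanh")(x)
    return models.Model(inputs, outputs)
```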
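The training strategy in b–d can likewise be sketched. Only the Huber loss, the batch size of 512, the random 315 s window labelled with Mw(T2), and the selection of the minimum-validation-loss model come from the caption; the optimizer, epoch count, sampling rate (assumed 1 Hz, so 315 s = 315 samples) and the synthetic stand-in arrays below are assumptions for illustration.

```python
import numpy as np
import tensorflow as tf

def sample_window(trace, mw_of_t, window_len=315):
    # Pick a random T1 and return the window [T1, T2] with T2 = T1 + 315 s,
    # labelled with Mw(T2), the magnitude at the end of the window.
    t1 = np.random.randint(0, trace.shape[0] - window_len)
    t2 = t1 + window_len
    return trace[t1:t2], mw_of_t[t2]

# Random stand-ins for the training and validation sets (shapes only).
n_stations = 50  # placeholder station count
x_train = np.random.randn(2048, 315, n_stations, 3).astype("float32")
y_train = np.random.uniform(-1, 1, size=(2048, 3)).astype("float32")
x_val = np.random.randn(512, 315, n_stations, 3).astype("float32")
y_val = np.random.uniform(-1, 1, size=(512, 3)).astype("float32")

model = build_pegsnet(m_samples=315, n_stations=n_stations)
model.compile(optimizer="adam", loss=tf.keras.losses.Huber())  # Huber loss (b)

# Keep the weights from the epoch with the lowest validation loss
# (the red star in b); that model is used on the test set and real data.
best = tf.keras.callbacks.ModelCheckpoint(
    "pegsnet_best.keras", monitor="val_loss", save_best_only=True)

model.fit(x_train, y_train,
          validation_data=(x_val, y_val),
          batch_size=512,   # batch size stated in the caption
          epochs=50,        # placeholder epoch count
          callbacks=[best])

# Window selection as in c-d, on one synthetic 1,500 s trace:
trace = np.random.randn(1500, n_stations, 3).astype("float32")
mw_t = np.linspace(0.0, 9.0, 1500)           # stand-in for Mw(t)
window, label = sample_window(trace, mw_t)   # label = Mw(T2)
```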

