All-cause mortality data for Germany [2000 - 2024]
*Data derived from DESTATIS, Germany's official federal statistical office.
The Federal Statistical Office (Statistisches Bundesamt, or Destatis) is a federal agency of Germany. It reports to the Federal Ministry of the Interior.
The Office is responsible for collecting, processing, presenting, and analyzing statistical information related to the economy, society, and environment.
The purpose is to provide objective, independent, and high-quality statistical data for scientists and the general public.
Modelling
"There is no excuse for failing to plot and look."
“The greatest value of a picture is when it forces us to notice what we never expected to see.”
– John Tukey
Cf.: Tukey, J. W. (1977). Exploratory Data Analysis. Addison-Wesley.
+ + +
Click to open interactive 2.5D Version
For detailed analyses up to 2023 see: Rockenfeller R, Günther M, Mörl F. 2023 Reports of deaths are an exaggeration: all-cause and NAA-test-conditional mortality in Germany during the SARS-CoV-2 era. R. Soc. Open Sci. 10: 221551. https://doi.org/10.1098/rsos.221551
Associated Python Code
ARIMA App
All-cause mortality data for Germany [2000 - 2024]
This application was created in R by Dr. Christopher B. Germann (June, 2024).

See also the following PhD thesis by Sievert (2016):
Interfacing R with Web Technologies for Interactive Statistical Graphics and Computing with Data.
DOI: https://doi.org/10.31274/etd-180810-5044
URL: https://dr.lib.iastate.edu/handle/20.500.12876/29605

References
*Van Der Donckt, J., Van Der Donckt, J., Deprost, E., & Van Hoecke, S. (2022). Plotly-Resampler: Effective Visual Analytics for Large Time Series. Proceedings - 2022 IEEE Visualization Conference - Short Papers, VIS 2022. https://doi.org/10.1109/VIS54862.2022.00013

*Sunitha, G., Sriharsha, A. V., Yalgashev, O., & Mamatov, I. (2023). Interactive visualization with plotly express. In Advanced Applications of Python Data Structures and Algorithms. https://doi.org/10.4018/978-1-6684-7100-5.ch009

*Mundargi, Z. K., Patel, K., Patel, A., More, R., Pathrabe, S., & Patil, S. (2023). Plotplay: An Automated Data Visualization Website using Python and Plotly. 2023 International Conference for Advancement in Technology, ICONAT 2023. https://doi.org/10.1109/ICONAT57137.2023.10079977

*Hyndman, R. J., & Koehler, A. B. (2006). Another look at measures of forecast accuracy. International Journal of Forecasting, 22(4), 679–688. https://doi.org/10.1016/j.ijforecast.2006.03.001
N.B. The PDF download function may not work, as it depends on your browser configuration. The application is embedded in a sandboxed iframe; if the download fails, open the app as a standalone application in a new window.

Permanent URL of the ARIMA app (standalone):
https://neuropsy.shinyapps.io/arima-v2/
The generic ARIMA(p, d, q) model is given by the following equation:

$$ \phi_p(B) (1 - B)^d y_t = \theta_q(B) \epsilon_t $$

Where:

  • \(B\) is the backshift operator, \(B^k y_t = y_{t-k}\)
  • \(d\) is the order of differencing
  • \(y_t\) is the observed time series
  • \(\epsilon_t\) is the white noise error term

The autoregressive (AR) part is given by:

$$ \phi_p(B) = 1 - \phi_1 B - \phi_2 B^2 - \cdots - \phi_p B^p $$

The moving average (MA) part is given by:

$$ \theta_q(B) = 1 + \theta_1 B + \theta_2 B^2 + \cdots + \theta_q B^q $$

Thus, the full ARIMA (\(p, d, q\)) model can be written as:

$$ (1 - \phi_1 B - \phi_2 B^2 - \cdots - \phi_p B^p) (1 - B)^d y_t = (1 + \theta_1 B + \theta_2 B^2 + \cdots + \theta_q B^q) \epsilon_t $$
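As a concrete illustration, this model can be fitted in R with the forecast package. The sketch below is a minimal example, not the exact configuration of the app: the input file is borrowed from the BSTS script further down, and automatic order selection via auto.arima is an assumption.
# Minimal sketch: fit an ARIMA(p, d, q) model to the weekly mortality series
library(forecast)
data <- read.csv("arima/filtered_mortality_data_for_r.csv")
y <- ts(data$mortality_rate, frequency = 52)   # weekly observations, yearly cycle
fit <- auto.arima(y)           # selects p, d, q (and seasonal orders) by AICc
summary(fit)                   # estimated phi and theta coefficients
plot(forecast(fit, h = 52))    # 52-week-ahead forecast with prediction intervals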

All-cause mortality data for Germany [2000 - 2024]
This application was created in R by Dr. Christopher B. Germann (June, 2024).

See also:
https://facebook.github.io/prophet/
https://cran.r-project.org/web/packages/rstan/vignettes/rstan.html

References
*Mo, J., Wang, R., Cao, M., Yang, K., Yang, X., & Zhang, T. (2023). A hybrid temporal convolutional network and Prophet model for power load forecasting. Complex & Intelligent Systems, 9(4), 4249–4261. https://doi.org/10.1007/s40747-022-00952-x

*Aziz, M. I. A., Barawi, M. H., & Shahiri, H. (2021). Is Facebook Prophet Superior than Hybrid Arima Model to Forecast Crude Oil Price? Sains Malaysiana, 51(8), 2633–2643. https://doi.org/10.17576/jsm-2022-5108-22

*Noviandy, T. R., Maulana, A., Idroes, G. M., Suhendra, R., Adam, M., Rusyana, A., & Sofyan, H. (2023). Deep Learning-Based Bitcoin Price Forecasting Using Neural Prophet. Ekonomikalia Journal of Economics, 1(1), 19–25. https://doi.org/10.60084/eje.v1i1.51

*Chaturvedi, S., Rajasekar, E., Natarajan, S., & McCullen, N. (2022). A comparative assessment of SARIMA, LSTM RNN and Fb Prophet models to forecast total and peak monthly energy demand for India. Energy Policy. https://doi.org/10.1016/j.enpol.2022.113097

*Desai, M., & Shingala, A. (2023). Time Series Prediction of Wheat Crop based on FB Prophet Forecast Framework. ITM Web of Conferences, 53, 02014. https://doi.org/10.1051/itmconf/20235302014
N.B. This is a rather complex model and the computations take time, even though parallel, asynchronous processing is utilized via multisession futures and promises: https://rstudio.github.io/promises/articles/promises_06_shiny.html
You will see progress indicators when data is processed. Patience is a virtue...

This is the permanent URL of the app:
https://neuropsy.shinyapps.io/prophet-v3

The basic Prophet model is given by the following equation:

$$ y(t) = g(t) + s(t) + h(t) + \epsilon_t $$

Where:

  • \(y(t)\) is the observed value at time \(t\)
  • \(g(t)\) is the trend function which models non-periodic changes
  • \(s(t)\) represents seasonal effects (daily, weekly, yearly)
  • \(h(t)\) represents the effects of holidays
  • \(\epsilon_t\) is the error term which represents any idiosyncratic changes not captured by the model
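For illustration, the additive model above can be fitted with the prophet R package. The following is a minimal sketch under assumed settings (the input file and column names are borrowed from the BSTS script below; the seasonality options are illustrative, not the app's configuration):
# Minimal sketch: fit the additive Prophet model y(t) = g(t) + s(t) + h(t) + eps_t in R
library(prophet)
df <- read.csv("arima/filtered_mortality_data_for_r.csv")
df <- data.frame(ds = as.Date(df$date), y = df$mortality_rate)   # Prophet expects columns ds and y
m <- prophet(df, yearly.seasonality = TRUE, weekly.seasonality = FALSE)
future <- make_future_dataframe(m, periods = 52, freq = "week")
fcst <- predict(m, future)
prophet_plot_components(m, fcst)   # separate panels for the trend g(t) and seasonality s(t)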

Under Construction...
"Medicine is a social science, and politics is nothing more than medicine on a grand scale."
~ Rudolf Virchow

"It is the curse of humanity that it learns to tolerate even the most horrible situations by habituation, that it forgets the most shameful happenings in the daily shame of events, and that it can hardly understand when individuals aim to destroy this infamy."
~ Rudolf Virchow
Bayesian Structural Time Series (BSTS)
R script
# Function to check if a package is installed, and install it if not
install_if_missing <- function(p) {
  if (!require(p, character.only = TRUE)) {
    install.packages(p, dependencies = TRUE)
    library(p, character.only = TRUE)
  }
}
# List of required packages
packages <- c("bsts", "Boom", "ggplot2", "dplyr")
lapply(packages, install_if_missing)
# Load necessary libraries
library(bsts)
library(Boom)
library(ggplot2)
library(dplyr)
# Load the dataset
data <- read.csv("arima/filtered_mortality_data_for_r.csv")
data$date <- as.Date(data$date)
# Log-transform the mortality rate for stability
data$log_mortality_rate <- log(data$mortality_rate + 1e-6)
# Define the state space model
ss <- list()
ss <- AddLocalLevel(ss, data$log_mortality_rate)
ss <- AddSeasonal(ss, data$log_mortality_rate, nseasons = 52)  # yearly seasonal cycle over 52 weekly observations
# Fit the BSTS model using MCMC
model <- bsts(log_mortality_rate ~ pcr_rate + vaccine_rate,
              state.specification = ss, 
              data = data, 
              niter = 1000)
# Plot the fitted model
plot(model)
# Prepare new data for prediction
horizon <- 52  # Predict for the next 52 weeks (1 year)
last_date <- max(data$date)
new_dates <- seq(last_date + 7, by = "week", length.out = horizon)
# Assuming the future rates are known or can be estimated; here we use the last known values as placeholders
new_pcr_rate <- tail(data$pcr_rate, 1)
new_vaccine_rate <- tail(data$vaccine_rate, 1)
newdata <- data.frame(
  date = new_dates,
  pcr_rate = rep(new_pcr_rate, horizon),
  vaccine_rate = rep(new_vaccine_rate, horizon)
)
# Predict future values
pred <- predict(model, horizon = horizon, newdata = newdata)
predicted_values <- exp(pred$mean) - 1e-6
# Plot the predictions
pred_data <- data.frame(
  date = new_dates,
  predicted_mortality_rate = predicted_values
)
# Plot with black background and contrasting colors
ggplot(data, aes(x = date, y = mortality_rate)) +
  geom_line(color = "cyan") +
  geom_line(data = pred_data, aes(x = date, y = predicted_mortality_rate), color = "yellow") +
  labs(title = "Mortality Rate Forecast",
       x = "Date",
       y = "Mortality Rate") +
  theme(
    panel.background = element_rect(fill = "black"),
    plot.background = element_rect(fill = "black"),
    panel.grid.major = element_line(color = "gray"),
    panel.grid.minor = element_line(color = "gray"),
    text = element_text(color = "white"),
    axis.text = element_text(color = "white"),
    axis.title = element_text(color = "white"),
    plot.title = element_text(color = "white")
  )
# Summary of output
summary_table <- data.frame(
  Date = new_dates,
  Predicted_Mortality_Rate = predicted_values
)
# Save the summary table as HTML
html_rows <- paste0("<tr><td>", summary_table$Date, "</td><td>",
                    summary_table$Predicted_Mortality_Rate, "</td></tr>",
                    collapse = "")
html_summary <- paste0("<table>",
                       "<tr><th>Date</th><th>Predicted Mortality Rate</th></tr>",
                       html_rows,
                       "</table>")
write(html_summary, file = "summary_table.html")
html_summary
Bayesian Time Series Forecasting

This R script performs Bayesian time series forecasting on a mortality rate dataset using the Bayesian Structural Time Series (BSTS) model. The key steps are as follows:

Load and Prepare Data: The dataset is read from a CSV file, and the date column is converted to a Date type. The mortality rate is log-transformed to stabilize variance.

Define State Space Model: A state space model is specified with a local level component and a seasonal component, assuming weekly seasonality.

Fit the BSTS Model: The model is fitted using the bsts package, incorporating pcr_rate and vaccine_rate as regression components. MCMC methods are used for parameter estimation.

Prepare New Data for Prediction: New data for the prediction horizon (next 52 weeks) is created, using the last known values of pcr_rate and vaccine_rate as placeholders.

Make Predictions: The fitted model is used to predict future mortality rates, providing a forecast for the next 52 weeks.

Plot Results: The script plots the observed mortality rates together with the 52-week forecast, highlighting the predicted trend.

MCMC Description

Markov Chain Monte Carlo (MCMC) methods offer several advantages for time series analysis, particularly when it comes to decomposition and the estimation of complex models. Here are some key benefits:

Advantages of MCMC for Time Series Analysis

  • Bayesian Framework:
    • Uncertainty Quantification: MCMC methods provide a natural framework for Bayesian inference, which allows for a full probabilistic description of uncertainty in model parameters and predictions.
    • Prior Information: Bayesian models can incorporate prior information, which can be particularly useful in time series analysis where domain knowledge is available.
  • Handling Complex Models:
    • Non-linear and Non-Gaussian Models: MCMC can handle models that are non-linear or have non-Gaussian noise distributions, which are often challenging for traditional methods.
    • Hierarchical Models: MCMC is well-suited for hierarchical models where parameters at different levels of the hierarchy are estimated simultaneously.
  • Decomposition of Time Series:
    • Flexible Trend and Seasonality Components: MCMC allows for flexible modeling of trend and seasonality components, accommodating changes over time.
    • State Space Models: MCMC is particularly effective for estimating state space models, where the time series is decomposed into unobserved components such as trend, seasonality, and noise.
  • Parameter Estimation:
    • Posterior Distributions: MCMC methods provide posterior distributions for model parameters, rather than point estimates, giving a more complete picture of parameter uncertainty.
    • Joint Estimation: MCMC jointly estimates all parameters, accounting for dependencies between them, which can improve the robustness and interpretability of the model.
  • Forecasting and Prediction:
    • Predictive Distributions: Bayesian forecasting with MCMC yields predictive distributions, which can be used to calculate prediction intervals and assess forecast uncertainty.
    • Scenario Analysis: MCMC enables scenario analysis by generating samples from the posterior predictive distribution under different assumptions or future conditions.

Specific Advantages for Time Series Decomposition

  • Robust Trend and Seasonality Estimates:
    • MCMC methods provide robust estimates of trend and seasonality by averaging over many possible decompositions, leading to more reliable and stable estimates.
    • The ability to model complex seasonality patterns that may vary over time.
  • Quantifying Uncertainty in Decomposition:
    • MCMC allows for the quantification of uncertainty in each component of the decomposition, such as the trend, seasonal effects, and residuals. This is crucial for understanding the confidence in the decomposed components.
    • Confidence intervals for the trend and seasonal components can be directly obtained from the posterior samples.
  • Incorporating Structural Changes:
    • MCMC methods can detect and accommodate structural changes in the time series, such as changes in the trend or seasonality, by allowing for model parameters to evolve over time.
    • The ability to incorporate breakpoints or changepoints in the series where the pattern changes significantly.
  • Handling Missing Data:
    • MCMC can naturally handle missing data by treating missing values as additional parameters to be estimated, leading to more accurate decompositions in the presence of incomplete data.

Example: Bayesian Structural Time Series (BSTS) with MCMC

The Bayesian Structural Time Series (BSTS) model, which we used in the previous script, leverages MCMC for parameter estimation. Here’s a brief recap of how it works:

  • Model Components: BSTS models decompose the time series into components such as local level (trend), seasonal effects, and regression effects (e.g., PCR rate, vaccine rate).
  • MCMC for Estimation: MCMC is used to draw samples from the posterior distributions of the model parameters, providing a comprehensive view of parameter uncertainty.
  • Posterior Predictions: Predictions are made by sampling from the posterior predictive distribution, yielding not just point forecasts but full predictive intervals.

By using MCMC, the BSTS model effectively handles the complexities of real-world time series data, including non-linear trends, varying seasonality, and external regressors, all while providing detailed uncertainty quantification.
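As an illustration of the last point, the posterior predictive draws returned by predict() for a bsts fit can be summarized directly into prediction intervals. This is a minimal sketch, assuming the model, newdata, horizon, and new_dates objects from the R script above:
pred <- predict(model, newdata = newdata, horizon = horizon, burn = 100)
# pred$distribution holds one row per retained MCMC draw and one column per forecast week
lower <- apply(pred$distribution, 2, quantile, probs = 0.025)
upper <- apply(pred$distribution, 2, quantile, probs = 0.975)
forecast_summary <- data.frame(
  date     = new_dates,
  forecast = exp(pred$mean) - 1e-6,    # back-transform from the log scale
  lower    = exp(lower) - 1e-6,
  upper    = exp(upper) - 1e-6
)
head(forecast_summary)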

Conclusion

MCMC methods bring a wealth of advantages to time series analysis and decomposition, offering a robust, flexible, and comprehensive approach to modeling, parameter estimation, and forecasting. The ability to quantify uncertainty and incorporate prior knowledge makes MCMC a powerful tool for analysts dealing with complex time series data.

MCMC vs. ARIMA

The choice between MCMC-based models and ARIMA (AutoRegressive Integrated Moving Average) models depends on the specific characteristics of the time series data and the goals of the analysis. Both approaches have their strengths and weaknesses, and their suitability can vary depending on the context. Below is a comparison of MCMC-based models and ARIMA models.

MCMC-Based Models

Strengths

  • Bayesian Framework:
    • Uncertainty Quantification: MCMC provides a full probabilistic description of uncertainty in model parameters and predictions.
    • Incorporation of Prior Knowledge: Prior information can be integrated into the model, which can be useful in situations where domain knowledge is available.
  • Flexibility:
    • Non-linear and Non-Gaussian Models: MCMC methods can handle non-linear relationships and non-Gaussian error distributions, making them more flexible in capturing complex patterns in the data.
    • Hierarchical and Multi-level Models: These models can naturally incorporate hierarchical structures and multi-level dependencies.
  • Decomposition and Structural Components:
    • State Space Models: MCMC is particularly effective for state space models, which decompose the time series into unobserved components such as trend, seasonality, and irregular fluctuations.
    • Handling Structural Changes: MCMC can model structural changes in the data, such as changes in trend or seasonality, by allowing for parameter evolution over time.
  • Forecasting:
    • Predictive Distributions: MCMC methods provide full predictive distributions, allowing for the estimation of prediction intervals and the assessment of forecast uncertainty.
    • Scenario Analysis: They enable scenario analysis by generating samples from the posterior predictive distribution under different assumptions or future conditions.

Weaknesses

  • Computational Intensity:
    • High Computational Cost: MCMC methods can be computationally intensive and time-consuming, particularly for large datasets or complex models.
    • Convergence Issues: Ensuring convergence of the MCMC chains can be challenging and may require careful tuning of the algorithm.
  • Complexity and Interpretation:
    • Model Complexity: MCMC-based models can be complex to specify and interpret, especially for users without a strong statistical background.
    • Software and Implementation: Implementing MCMC models may require specialized software and expertise.

ARIMA Models

Strengths

  • Simplicity and Interpretability:
    • Ease of Use: ARIMA models are relatively simple to specify and understand, making them accessible to a wide range of users.
    • Well-Established Framework: The ARIMA framework is well-established and widely used in time series analysis.
  • Efficiency:
    • Computationally Efficient: ARIMA models are generally less computationally demanding compared to MCMC-based models, making them suitable for large datasets and real-time applications.
    • Standard Software Availability: ARIMA models are available in most statistical software packages and are easy to implement.
  • Short-Term Forecasting:
    • Effective for Short-Term Forecasting: ARIMA models are effective for short-term forecasting, especially for time series data that exhibit clear autocorrelations and seasonality patterns.

Weaknesses

  • Model Assumptions:
    • Linear Relationships: ARIMA models assume linear relationships, which may not capture complex patterns in the data.
    • Gaussian Errors: The assumption of normally distributed errors may not hold for all time series data.
  • Limited Flexibility:
    • Fixed Structure: ARIMA models have a fixed structure and may not handle non-stationarity, structural breaks, or varying seasonality as effectively as MCMC-based models.
    • Inability to Incorporate Prior Information: ARIMA models do not incorporate prior knowledge or information, limiting their ability to integrate domain expertise.
  • Uncertainty Quantification:
    • Limited Uncertainty Quantification: While ARIMA models can provide confidence intervals for forecasts, they do not offer the same level of probabilistic uncertainty quantification as Bayesian models.

Conclusion

Both MCMC-based models and ARIMA models have their own strengths and weaknesses. MCMC-based models are more flexible and provide a comprehensive probabilistic framework, making them suitable for complex time series with non-linear patterns and hierarchical structures. However, they are computationally intensive and complex to implement.

ARIMA models, on the other hand, are simpler, computationally efficient, and effective for short-term forecasting of time series with clear autocorrelations and seasonality patterns. They are less flexible and may not capture complex relationships or provide detailed uncertainty quantification.

The choice between these models should be based on the specific characteristics of the data, the goals of the analysis, the available computational resources, and the level of expertise of the user.

Combining MCMC & ARIMA

Combining MCMC and ARIMA models involves leveraging the strengths of both approaches to create a more robust and flexible framework for time series analysis and forecasting. This combination can address the limitations of each method individually while enhancing the overall modeling capability.

MCMC and ARIMA Combined Models

1. Overview

  • ARIMA Component: ARIMA (AutoRegressive Integrated Moving Average) models are used to capture linear relationships in the time series, including trends and seasonality. ARIMA models are particularly effective for short-term forecasting and can handle data with autocorrelation and stationarity issues.
  • MCMC Component: Markov Chain Monte Carlo (MCMC) methods are used within a Bayesian framework to estimate the posterior distributions of model parameters. MCMC allows for more flexible modeling of uncertainty and can incorporate prior knowledge into the model.

2. Motivation for Combining MCMC and ARIMA

  • Enhanced Uncertainty Quantification: By using MCMC, the combined model can provide a full probabilistic description of uncertainty in the parameter estimates and forecasts, unlike traditional ARIMA models, which provide point estimates and approximate confidence intervals.
  • Flexibility in Modeling: MCMC allows for the incorporation of complex, non-linear relationships and hierarchical structures that ARIMA models alone might not capture effectively.
  • Incorporation of Prior Knowledge: Bayesian MCMC methods can incorporate prior distributions on the parameters, making the model more informative, especially when prior domain knowledge is available.

3. Methodology

To combine ARIMA and MCMC, the basic idea is to embed the ARIMA model within a Bayesian framework and use MCMC to estimate the model parameters. This involves:

  • Defining the ARIMA Model: Specify the ARIMA model structure (order of autoregression, integration, and moving average components).
  • Setting Up the Bayesian Framework: Define prior distributions for the ARIMA model parameters (e.g., AR coefficients, MA coefficients, noise variance).
  • Using MCMC for Parameter Estimation: Use MCMC techniques (e.g., Gibbs sampling, Metropolis-Hastings algorithm) to sample from the posterior distribution of the ARIMA model parameters.
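One practical way to realize this combination in R is to place an autoregressive component inside a Bayesian state space model and estimate it by MCMC, for example with bsts::AddAr. The sketch below is illustrative only; the lag order, iteration count, and use of the log-mortality series are assumed settings, not the authors' choices:
# Minimal sketch: Bayesian AR dynamics (ARIMA-style) estimated via MCMC with bsts
library(bsts)
y <- data$log_mortality_rate                # series prepared in the BSTS script above
ss <- list()
ss <- AddAr(ss, y, lags = 2)                # AR(2) dynamics with priors on the coefficients
ss <- AddSeasonal(ss, y, nseasons = 52)     # yearly seasonal cycle over weekly data
fit <- bsts(y, state.specification = ss, niter = 2000)
plot(fit, "components")                     # posterior means of the AR and seasonal components
pred <- predict(fit, horizon = 52)
plot(pred)                                  # forecast with full Bayesian prediction intervals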
Bayesian MCMC Mortality Forecast Plot
'bsts' Package
*Grab N Go Info (2022, September 15). Time Series Causal Impact Analysis in R | Machine Learning [YouTube Video]. https://youtu.be/aUbUlquTrCg

# This Python script produces the plot which can be found under the following URL: 
# https://christopher-germann.de/wp-content/uploads/mortality_rate_plot_filtered_excluded_highlighted_black.png
import pandas as pd
import matplotlib.pyplot as plt
from matplotlib.legend_handler import HandlerTuple

# Load data from CSV into a DataFrame
df = pd.read_csv('m.csv')

# Convert 'year' and 'week' to datetime index
df['week'] = df['week'].apply(lambda x: f'{x:02d}')  # Ensure week number has leading zero if necessary
df['date'] = pd.to_datetime(df['year'].astype(str) + df['week'].astype(str) + '-1', format='%Y%U-%w')
df.set_index('date', inplace=True)

# Filter out rows with zero mortality rate
df_filtered = df[df['mortality rate'] != 0.0]

# Exclude the last 4 values
df_filtered_excluded = df_filtered.iloc[:-4]

# Define milestones with updated colors, markers, and shapes
milestones = [
    {'week': '2020-12', 'case': 1055, 'label': 'Start of Pandemic', 'color': '#f535aa', 'marker': 'o', 'size': 300},  # Neon Pink
    {'week': '2020-52', 'case': 1095, 'label': 'Start of Vaccination Campaign', 'color': '#39ff14', 'marker': 's', 'size': 300},  # Neon Green
    {'week': '2023-35', 'case': 1235, 'label': 'End of PCR Testing', 'color': '#00d8ff', 'marker': '^', 'size': 300},  # Neon Blue
    {'week': '2023-43', 'case': 1243, 'label': 'End of Vaccination Data', 'color': '#ff9500', 'marker': 'D', 'size': 300}  # Neon Orange
]

# Plotting the filtered data excluding last 4 values
fig, ax = plt.subplots(figsize=(14, 7))

# Plot mortality rate with semi-transparent white markers
ax.plot(df_filtered_excluded.index, df_filtered_excluded['mortality rate'], marker='o', linestyle='-', color='white', label='Mortality Rate', alpha=0.5)

# Annotate milestones with adjusted scatter plot properties and drop shadows
for milestone in milestones:
    week_start = pd.to_datetime(milestone['week'] + '-1', format='%Y-%W-%w')
    if week_start in df_filtered_excluded.index:
        shadow_color = 'black'
        shadow_alpha = 0.5
        shadow_size = milestone['size'] * 1.5  # Adjust shadow size

        # Draw drop shadow
        shadow = ax.scatter(week_start, df_filtered_excluded.loc[week_start, 'mortality rate'],
                            color=shadow_color, marker=milestone['marker'], s=shadow_size,
                            alpha=shadow_alpha, zorder=4)

        # Draw milestone point with 0.9 opacity
        point = ax.scatter(week_start, df_filtered_excluded.loc[week_start, 'mortality rate'],
                           color=milestone['color'], marker=milestone['marker'], s=milestone['size'],
                           edgecolor='white', linewidth=0.5, label=milestone['label'], zorder=5, alpha=0.9)

        # Adjust z-order so that shadow is behind the milestone point
        point.set_zorder(shadow.get_zorder() + 1)

# Customize legend with smaller symbols and black background
legend_labels = [milestone['label'] for milestone in milestones]
legend_handles = [plt.Line2D([0], [0], marker=milestone['marker'], color='w', markerfacecolor=milestone['color'], markersize=9, label=milestone['label']) for milestone in milestones]

legend = ax.legend(handles=legend_handles, labels=legend_labels, loc='upper left', fontsize=12, handler_map={tuple: HandlerTuple(ndivide=None)})
legend.get_frame().set_facecolor('black')  # Set legend background color to black
for text in legend.get_texts():
    text.set_color('white')  # Set legend text color to white

# Format plot
ax.set_title('Mortality Rate Over Time (Filtered, excluding last 4 values)', color='white')
ax.set_xlabel('Date', color='white')
ax.set_ylabel('Mortality Rate', color='white')
ax.grid(True, color='gray')
ax.tick_params(axis='both', colors='white')
ax.set_facecolor('black')

# Adding a shaded background to indicate before and after vaccination campaign
start_vaccination_date = pd.to_datetime('2020-12-27', format='%Y-%m-%d')
end_vaccination_date = pd.to_datetime('2023-12-31', format='%Y-%m-%d')

ax.axvspan(start_vaccination_date, end_vaccination_date, color='gray', alpha=0.3)

# Save plot as PNG with transparent background
png_filename = 'mortality_rate_plot_filtered_excluded_highlighted.png'
fig.savefig(png_filename, transparent=True, bbox_inches='tight', dpi=300)
plt.close()

print(f"Plot has been saved as {png_filename}")

Dr. Christopher B. Germann | 28.06.2024

