This document outlines the backup and restore procedures, as well as the Disaster Recovery (DR) plan for the Local Food AI stack.
Given the 3 GB+ size of the OpenFoodFacts dataset, dumping the entire MySQL data volume on every backup cycle is resource-intensive. The strategy is therefore split into Code Backup and Data Backup.
The entire application infrastructure (Dockerfiles, Python scripts, configuration) is tracked in the Git repository.
Backup Command: git push origin main
Frequency: Pushed manually by developers at the end of every sprint.
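Code recovery is therefore a clone and rebuild. A minimal sketch, where the remote URL and directory name are placeholders for the actual values:

git clone <repo-url> food_project        # placeholder URL; substitute the real remote
cd food_project
sudo docker-compose up -d --build        # rebuild images from the tracked Dockerfiles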
We use mysqldump to create a logical backup of the user data and dietary profiles, while skipping the massive, immutable OpenFoodFacts tables (which can be re-ingested from the source CSV).
Backup Command:
sudo docker exec food_project-mysql-1 mysqldump -u root -proot_pass food_db users user_health_profiles plate_items > /backup/food_db_users_$(date +%F).sql
Frequency: Daily via a server-side cron job (0 3 * * *).
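For reference, the corresponding root crontab entries might look like the following sketch; note that % must be escaped as \% inside crontab, and the log path and 14-day retention window are assumptions, not values from the repo:

# Daily logical dump of the user tables at 03:00
0 3 * * * docker exec food_project-mysql-1 mysqldump -u root -proot_pass food_db users user_health_profiles plate_items > /backup/food_db_users_$(date +\%F).sql 2>> /var/log/food_backup.log
# Prune dumps older than 14 days (retention window is an assumption)
0 4 * * * find /backup -name 'food_db_users_*.sql' -mtime +14 -delete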
If the database container crashes or the volumes are corrupted:
1. Stop the application: sudo docker-compose stop app
2. Restore the user tables from the SQL dump:
cat /backup/food_db_users_2026-05-12.sql | sudo docker exec -i food_project-mysql-1 mysql -u root -proot_pass food_db
3. Re-run the background ingestion script (./data_sync.sh) to rebuild the ~3 GB OpenFoodFacts products_core tables (an illustrative sketch of the flow follows this list).
4. Restart the application: sudo docker-compose start app
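The authoritative re-ingestion logic lives in ./data_sync.sh; the sketch below is purely illustrative, assuming the standard OpenFoodFacts CSV export URL, a products_core table with a matching column layout, and local_infile enabled on the MySQL server (none of these details are confirmed here):

#!/usr/bin/env bash
# Illustrative only; the real ./data_sync.sh is authoritative.
set -euo pipefail

# 1. Fetch and decompress the OpenFoodFacts CSV export (~3 GB uncompressed)
curl -fL -o /tmp/off.csv.gz https://static.openfoodfacts.org/data/en.openfoodfacts.org.products.csv.gz
gunzip -f /tmp/off.csv.gz

# 2. Copy the CSV into the container so the mysql client can read it
sudo docker cp /tmp/off.csv food_project-mysql-1:/tmp/off.csv

# 3. Bulk-load into products_core; the export is tab-separated despite the
#    .csv name (column mapping is a placeholder; requires local_infile=1)
sudo docker exec -i food_project-mysql-1 mysql --local-infile=1 -u root -proot_pass food_db \
  -e "LOAD DATA LOCAL INFILE '/tmp/off.csv' INTO TABLE products_core FIELDS TERMINATED BY '\t' IGNORE 1 LINES;"

# 4. Sanity check: confirm the table is populated
sudo docker exec food_project-mysql-1 mysql -u root -proot_pass food_db -e "SELECT COUNT(*) FROM products_core;"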
If deploying in the distributed Multi-Hypervisor PoC environment (Hyper-V / VirtualBox / WSL):
The app is engineered to catch LLM connection timeouts gracefully. If the VirtualBox Ollama node dies, the Streamlit app continues to serve standard database lookups, returning a safe fallback message for AI evaluations.
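This failure mode can be checked manually from the app host. A minimal sketch using Ollama's standard HTTP API on its default port 11434 (the node IP below is a placeholder):

# Probe the Ollama node with a short timeout; a non-zero exit mirrors the
# timeout condition the app's fallback path handles
curl --silent --fail --max-time 5 http://192.168.56.10:11434/api/tags \
  && echo "Ollama node reachable" \
  || echo "Ollama node down; app will serve DB lookups and return the AI fallback message"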