View Source

TG-439: Repository Cleanup - Removed scratch scripts and obsolete artifacts

lanfr144 committed 3 days ago
parent
commit
39eb5570be
73 files changed with 0 additions and 3073 deletions
  1. AGENTS.md (+0 -16)
  2. AI_History/Client_Presentation.md (+0 -30)
  3. AI_History/Conversation_Log.md (+0 -38)
  4. AI_History/Retrospective.md (+0 -17)
  5. AI_History/implementation_plan.md (+0 -33)
  6. AI_History/status_report.md (+0 -47)
  7. AI_History/taiga_backlog_creation.webp (BIN)
  8. AI_History/task.md (+0 -26)
  9. close_sprint10.py (+0 -40)
  10. configure_zabbix_dependencies.py (+0 -86)
  11. configure_zabbix_email.py (+0 -90)
  12. configure_zabbix_teams.py (+0 -149)
  13. create_bookmarks_page.py (+0 -50)
  14. create_taiga_task.py (+0 -19)
  15. create_zabbix_dashboard.py (+0 -102)
  16. generate_taiga_wiki.py (+0 -71)
  17. init_zabbix_db.sh (+0 -1)
  18. legacy_scripts/check_projects.py (+0 -15)
  19. legacy_scripts/convert_datatypes.py (+0 -76)
  20. legacy_scripts/fetch_tasks.py (+0 -23)
  21. legacy_scripts/gen_presentation.py (+0 -45)
  22. legacy_scripts/reset_pwd.py (+0 -41)
  23. legacy_scripts/taiga_checker.py (+0 -24)
  24. legacy_scripts/taiga_closeout.py (+0 -12)
  25. legacy_scripts/taiga_feed.py (+0 -31)
  26. legacy_scripts/taiga_sprint4.py (+0 -18)
  27. legacy_scripts/taiga_sprint4_deploy.py (+0 -40)
  28. legacy_scripts/test_mail.py (+0 -17)
  29. legacy_scripts/test_taiga.py (+0 -21)
  30. project_text.txt (+0 -125)
  31. reset_zabbix_db.sh (+0 -1)
  32. retro_planning_text.txt (+0 -7)
  33. scratch/audit_taiga.py (+0 -41)
  34. scratch/close_sprint12.py (+0 -44)
  35. scratch/read_pdfs.py (+0 -17)
  36. scratch/setup_sprint12_taiga.py (+0 -55)
  37. scratch/setup_sprint13_taiga.py (+0 -56)
  38. scratch/test_search.py (+0 -39)
  39. setup_mail_forwarding.sh (+0 -26)
  40. setup_nginx_zabbix.py (+0 -83)
  41. setup_postfix.sh (+0 -19)
  42. setup_searxng.sh (+0 -51)
  43. setup_sprint10_taiga.py (+0 -62)
  44. setup_sprint11_taiga.py (+0 -80)
  45. setup_sprint7_taiga.py (+0 -60)
  46. setup_sprint8_taiga.py (+0 -64)
  47. setup_unix_user.sh (+0 -13)
  48. sync_current_sprint.py (+0 -65)
  49. sync_taiga.py (+0 -79)
  50. taiga_audit.py (+0 -39)
  51. taiga_sync_fixer.py (+0 -91)
  52. taiga_sync_fixes.py (+0 -27)
  53. taiga_sync_sprint11_p2.py (+0 -73)
  54. taiga_wiki/00_Epics.md (+0 -10)
  55. taiga_wiki/Sprint_1.md (+0 -39)
  56. taiga_wiki/Sprint_2.md (+0 -37)
  57. taiga_wiki/Sprint_3.md (+0 -35)
  58. taiga_wiki/Sprint_4.md (+0 -35)
  59. taiga_wiki/Sprint_5.md (+0 -35)
  60. taiga_wiki/Sprint_6.md (+0 -35)
  61. taiga_wiki/Sprint_7.md (+0 -35)
  62. taiga_wiki/Sprint_8.md (+0 -35)
  63. taiga_wiki_260508.py (+0 -39)
  64. taiga_wiki_bookmarks.py (+0 -32)
  65. taiga_wiki_may07.py (+0 -67)
  66. taiga_wiki_push.py (+0 -38)
  67. taiga_wiki_rename.py (+0 -39)
  68. task_plan.md (+0 -16)
  69. test_login.py (+0 -46)
  70. test_snmp.py (+0 -3)
  71. update_bookmarks.py (+0 -40)
  72. update_taiga_status.py (+0 -55)
  73. wiki_links_test.py (+0 -7)

+ 0 - 16
AGENTS.md

@@ -1,16 +0,0 @@
-# Instructions for Antigravity
-
-## Technical Stack
-- **Language:** Python 3.1x (always use `venv`).
-- **Scripts:** Shell (Bash/Zsh) for automation.
-- **Project Management:** Taiga (Git-to-Taiga link active).
-
-## Code Conventions
-- **Python:** Strictly follow PEP 8. Use docstrings for every function.
-- **Git:** Mandatory commit message format: `TG-<ID> #<status> : <description>` (e.g. TG-42 #closed : Add email validation).
-- **Security:** NEVER include secrets in code. Use `.env`.
-
-## Expected Behavior
-1. If a shell command is risky (`rm`, `chmod`), ask for confirmation.
-2. Always update `task_plan.md` after a major step is completed.
-3. Always update Taiga.

+ 0 - 30
AI_History/Client_Presentation.md

@@ -1,30 +0,0 @@
-# 🚀 Executive Project Update: Local Food AI Platform
-
-**To Our Valued Client,**
-
-We are thrilled to present the monumental progress achieved in the **Local Food AI Platform**. Your investment has successfully funded the transition of a conceptual idea into a highly secure, enterprise-grade Artificial Intelligence ecosystem. 
-
-Below is an executive summary of the value delivered during our most recent development cycles:
-
-## 🏦 1. Total Data Sovereignty & Security
-We have engineered an architecture that guarantees **100% Data Privacy**. Unlike consumer AI tools that leak confidential queries to the cloud:
-* **True Local Intelligence:** The Mistral AI neural network and your massive MySQL databases run entirely on isolated, air-gapped internal servers. No recipe, no search query, and no user profile ever leaves your corporate firewall.
-* **Encrypted Access:** We deployed heavy `bcrypt` cryptographic hashing to secure every user account against breaches.
-
-## 🧠 2. Autonomous Web Intelligence (SearXNG)
-To ensure the AI is never outdated, we successfully deployed an anonymous Docker-based metasearch proxy. If a user asks the AI about a brand-new medical ingredient not present in your databases, the AI recognizes the gap autonomously, covertly scrapes the internet without tracking, and instantly incorporates the live data to answer the question!
-
-## 🔬 3. The "Scientific Medical" User Interface
-We completely overhauled the front-end user experience to reflect luxury and scientific precision. 
-
-![Premium UI Dashboard Visualization](file:///C:/Users/lanfr144/.gemini/antigravity/brain/fa60b8a2-c1d5-4b3d-8ff2-f6588c78798f/premium_nutrition_dashboard_ui_1776925129649.png)
-
-* **Dynamic 'My Plate' Architecture:** Users can dynamically combine ingredients from a database of millions of entries. Our backend calculates compounding macro-totals (Protein, Fat, Carbs) in real-time, functioning as an enterprise diet tracker.
-* **Granular Data Search:** The platform boasts high-speed filtration algorithms, allowing practitioners to search exactly for criteria like *"Products with > 20g Protein and < 5g Sugar"*.
-
-## 🤖 4. The Prompt-Engineered Dietitian
-Most chatbots simply "talk". We implemented complex algorithmic *Prompt Engineering* to force the AI into acting as a highly structured Clinical Dietitian. The system now mathematically generates highly accurate, multi-day meal plans mapped directly to exact caloric and dietary constraints (Vegan, Keto, Omnivore) and outputs them strictly as professional Markdown data tables instead of loose text.
-
----
-**Return on Investment (ROI):** 
-Your financing has birthed a fully-scalable, premium-designed, highly secure platform capable of replacing thousands of dollars in cloud API costs while protecting intellectual property. The system is ready to revolutionize local nutritional analysis pipelines.

+ 0 - 38
AI_History/Conversation_Log.md

@@ -1,38 +0,0 @@
-# Local Food AI - Conversation & Action Log
-
-This document serves as a summary of the requests made by FrancoisLange and the actions successfully executed by the AI (Antigravity).
-
-## 1. Initial Project Scoping & Sprints Definition
-**User Request:**
-Provide user stories from the Scrum process for the backlog based on the Vision Statement ("A local food AI that provides full nutritional value..."). Explain what to do also. Evaluate LLMs to fit within Ubuntu 24.04 VM hardware limits (30GB RAM, 8 vCPUs) and define a public git structure (`git.btshub.lu`).
-
-**AI Action:**
-- Evaluated hardware constraints and proposed lightweight models (e.g., `mistral:7b-instruct-q4`, `llama3:8b-instruct-q4`) via Ollama.
-- Defined 6 comprehensive Epics with nested user stories.
-- Expanded those into a concrete **6-Sprint plan**, including **Acceptance Criteria (Definition of Done)** for every task.
-- Saved the structured Agile plan to `implementation_plan.md` and `task.md`.
-
-## 2. PROJECT_CONTEXT Alignment
-**User Request:**
-Check if the new 6-Sprint Local Food AI plan is compatible with `PROJECT_CONTEXT.md` and change it if necessary.
-
-**AI Action:**
-- Overwrote `PROJECT_CONTEXT.md` to perfectly merge the original "Streamlit UI + MySQL Backend" architecture with the new "Local Food AI" features.
-- Established strict rules focusing on privacy (no data leaves server).
-- Saved as `PROJECT_CONTEXT.md`.
-
-## 3. Taiga Automation & Backlog Setup
-**User Request:**
-Connect to Taiga (`https://192.168.130.161/taiga`) to store all the histories and sprints using the provided credentials (`FrancoisLange` / `BTSai123`).
-
-**AI Action:**
-- Deployed an internal Playwright web-automation subagent that bypassed local SSL warnings.
-- The subagent successfully logged into the local Taiga instance, created a Scrum project exactly named "**Local Food AI**".
-- Automatically populated the backlog via bulk insert with all 13 User Stories.
-- Established Sprints 1 through 6 on the Agile board and assigned preliminary stories.
-- **Recording Exported:** The video recording of the browser subagent executing this on Taiga was saved as an animated `.webp` format (`taiga_backlog_creation.webp`).
-
-## Attached Context
-- `taiga_backlog_creation.webp` (A video capture of the AI interacting with your Taiga interface!)
-- `implementation_plan.md` (The raw Sprint & Story backlog mapping)
-- `task.md` (The AI's internal task tracking state)

+ 0 - 17
AI_History/Retrospective.md

@@ -1,17 +0,0 @@
-# Agile Sprint Retrospective
-**Project:** Local Food AI Platform
-**Sprint Goal:** Secure Data Ingestion, Medical Expansion, and UI/UX Overhaul
-
-## 🏆 What Went Well
-* **Database Agility:** Transitioning from rigid SQL arrays to dynamic pandas DataFrame ingestion (`ingest_csv.py`) allowed us to process massive OpenFoodFacts schemas instantly without crashing.
-* **Privacy-First Architecture:** Successfully establishing an air-gapped system where the AI scraper (SearXNG) and the Large Language Model (Mistral) operate entirely locally proves extreme Corporate Data Sovereignty.
-* **Rapid Feature Integration:** Expanding the platform from a simple calculator to a full-fledged Clinical Profiler (incorporating Diabetes, Hypertension, and Pregnancy monitoring) was achieved incredibly fast using Pandas styling logic.
-
-## 🚧 What Went Wrong (Or Needed Improvement)
-* **Dataset Encoding Bugs:** The OpenFoodFacts CSV files contain heavy French datasets. Early ingestion attempts on Windows corrupted characters (`'Artichaut' -> 'Artichaut'`) due to OS-default rendering limitations over `utf-8`. This required an urgent hotfix in the data pipeline.
-* **Schema Scalability:** Constantly injecting new tables (`plates`, `user_profiles`) into `setup_db.py` without a formal migration tool (like Alembic) makes iterative DevOps slightly dangerous for live production data.
-
-## 🎯 Action Items for Next Sprint
-* Implement a formal database schema migration tool (Flyway or Alembic) to prevent data loss during continuous integration.
-* Optimize the SQL parsing speed by adding specific integer boundaries to the B-TREE indexes.
-* Deploy an actual external SMTP server (e.g., Postfix/Sendgrid) to fully operationalize the mocked password-reset pipeline.

+ 0 - 33
AI_History/implementation_plan.md

@@ -1,33 +0,0 @@
-# Premium UI Overhaul & "My Plate" Combinations Plan
-
-Now that our backend is perfectly scaled, we need to focus heavily on the **Frontend Experience** to completely conquer User Stories `#5`, `#6`, `#7`, and `#8`. The goal is to evolve the currently simple Streamlit layout into a stunning, glassmorphic, premium "Web Application" feel, while unlocking the ability to save custom food combinations.
-
-## User Review Required
-
-Because Streamlit natively lacks advanced multi-table relational persistence, we must add new tables to MySQL to save a user's food lists permanently across sessions. **Are you okay with me modifying `setup_db.py` to add `plates` and `plate_items` tables, and does the proposed Premium UI style match your vision?**
-
-## Proposed Changes
-
-### 1. Database Persistence ("My Plates")
-We will add two cleanly structured tables right after the `users` table logic:
-- **`plates`**: Stores `id`, `user_id` (foreign key), and `plate_name`.
-- **`plate_items`**: Stores `id`, `plate_id` (foreign key), `product_code`, and `grams`.
-*This solves Story #8 perfectly without breaking existing data.*
-
-### 2. Premium Aesthetics & Logic Overhaul
-**Premium CSS Styling:** I will inject a massive `<style>` block to enforce a **curated dark mode**, smooth gradients, glassmorphic container aesthetics, modern typography *(e.g., Google's 'Inter')*, and micro-animations on interactive elements to ensure the project looks like an absolute state-of-the-art enterprise app.
-
-**Nutritional Search Filters (Story #6):** 
-I will add sleek Streamlit sliders and multi-select dropdowns to the "Raw Data Search" tab. Instead of just searching by name, you will be able to say: *"Show me foods with > 20g Protein and < 5g Sugar, sorted by energy."*
-
-**My Plate Tab (Story #7):** 
-I will build a dedicated 3rd Tab called "🍽️ My Plates" where users can:
-- Create named plates (e.g., "Post-Workout Meal").
-- Add searched foods directly to their active plate.
-- Define the gram quantity for each item.
-- The app will dynamically sum up the combined macro totals (Proteins, Carbs, Fats) across the entire plate locally using a Pandas aggregation over the grabbed SQL data!
-
-## Open Questions
-
-1. **Macro Priorities:** Are there specific macro nutrients (like Energy, Proteins, Fat, Sugars, Salt) that you want explicitly highlighted when viewing a "Combined Nutritional Value Overview" of a Plate, or should I attempt to dynamically graph as many as possible?
-2. **Visual Theme:** Do you prefer a vibrant "Cyberpunk Dark Mode" or a more elegant "Sleek Dark Medical/Scientific" aesthetic with softer blues and greens?

+ 0 - 47
AI_History/status_report.md

@@ -1,47 +0,0 @@
-# 🏆 Agile Summary & SCRUM Wiki
-
-Here is the official report for the **Local Food AI** project, structured to satisfy the Scrum rituals (Daily, Review, Planning) and to feed directly into your Taiga Wiki.
-
----
-
-## 1. 🌅 The Daily (Where are we?)
-**Current Status:**
-The application foundation is 90% complete. The base infrastructure (MySQL, Ubuntu, Docker, Ollama) is fully stable, the Git/Taiga webhook integration pipeline is operational, and the user interface (UI) has just undergone a major technical overhaul. Technically, only one major Epic/User Story remains in our Backlog.
-
----
-
-## 2. 🔍 The Sprint Review (What did we do yesterday?)
-During the last continuous-development Sprint, we validated User Stories **#5, #6, #7, and #8**.
-
-**Technical, Demonstrable Achievements:**
-* **"Scientific Medical" Redesign (Frontend):** Injected advanced CSS into `app.py` to switch Streamlit to a premium "Dark Mode" design using the Inter font, blue/cyan gradients, and "Glassmorphism" effects.
-* **Advanced Filters (SQL/Backend):** Created 4 interactive sliders (Proteins, Fats, Carbohydrates, Sugars) that dynamically modify the `WHERE ... AND protéines >= X` clause against the MySQL database.
-* **"My Plate" Architecture (Database):** Safely modified `setup_db.py` to automatically generate two new relational tables (`plates` and `plate_items`). These tables use Foreign Keys to link foods directly to the session's `user_id`.
-* **Aggregation Algorithm (Data Logic):** Integrated Python/Pandas logic that instantly computes and sums the macros (Proteins, Fats, Carbs) of all foods in a virtual plate.
-* *All of these changes were successfully committed to Gogs, triggering the webhook to Taiga (Tasks #23, #24, #26, #27).*
-
----
-
-## 3. 🎯 The Sprint Planning (What will we do?)
-**Next Objective:** Build **User Story #11 (AI Menu Proposals)**.
-
-**Planned Tasks (Sprint Backlog):**
-1. Create a new section/tab in the code for menu generation.
-2. Design a very specific prompt-engineering algorithm that imposes strict constraints on **Mistral**.
-3. Wire the user's request (e.g. "I want a 2000 kcal, high-protein menu") to the local SQL database to feed real examples to the LLM, so that it proposes a concrete menu rather than an invented one.
-4. Finalize the last play-tests on the Ubuntu VM.
-
----
-
-## 4. 📚 What to put in the SCRUM Wiki (Taiga)
-Copy and paste these blocks into your Taiga Wiki to demonstrate technical mastery of the project:
-
-### 🏛️ Architecture & Technologies
-* **Frontend:** **Streamlit** framework (Python) augmented with native CSS injected via `st.markdown(unsafe_allow_html=True)` to guarantee a "Scientific Medical" aesthetic (Premium UX/UI focus).
-* **Backend Intelligence:** Native integration of the **Ollama (Mistral model)** API with *Tool/Function Calling* to anonymously scrape the Web via a local **SearXNG** container on port `8080`.
-* **Database Pipeline:** Dynamic, asynchronous ingestion of open CSV data via Pandas into MySQL. Rigid SQL schemas were abandoned in favor of auto-generating the 200 columns via the ORM.
-* **Security & Access:** Implementation of a **PoLP** (Principle of Least Privilege) model. The application natively handles password hashing (via `bcrypt`), and the `setup_db.py` script grants granular privileges (e.g. `IDENTIFIED BY ... GRANT SELECT, INSERT... TO 'db_app_auth'`).
-
-### 🔄 DevOps & Deployment
-* The rudimentary CI/CD relies on a **Gogs -> Taiga** integration. Each commit (e.g. `TG-23`) automatically documents the Agile card via webhook.
-* The system is deployable via the unified `deploy.sh` script (which manages the Python virtual environment) and `setup_searxng.sh` (which handles Docker orchestration).

BIN
AI_History/taiga_backlog_creation.webp


+ 0 - 26
AI_History/task.md

@@ -1,26 +0,0 @@
-# Local Food AI - Task Breakdown
-
-- [x] Update PROJECT_CONTEXT.md to merge original architecture with new features
-- [x] Detailing the Sprints in Planning Mode
-- [x] Await user approval on Implementation Plan
-- [x] Integrate Project and Backlog into Taiga
-- [ ] **Execute Sprint 1: Foundation & Authentication**
-  - [ ] Initialize Git Repo at `git.btshub.lu` and push `.gitignore`
-  - [ ] Add `evegi144` as project collaborator
-  - [ ] Finalize `deploy.sh` and Docker setup
-  - [ ] Complete `init.sql` for MySQL Users table
-  - [ ] Build basic Streamlit Login App
-- [ ] **Execute Sprint 2: Core Nutritional Database**
-  - [ ] Import food CSV via Pandas
-  - [ ] Implement search views in Streamlit
-- [ ] **Execute Sprint 3: Food Combinations**
-  - [ ] Build math aggregation logic
-  - [ ] Link combos to MySQL profiles
-- [ ] **Execute Sprint 4: AI & Chat**
-  - [ ] Pull and test quantized Ollama models
-  - [ ] Build Streamlit chat UI
-- [ ] **Execute Sprint 5: AI Menu Proposals & Web Search**
-  - [ ] Write RAG integration
-  - [ ] Configure local proxy web search tool
-- [ ] **Execute Sprint 6: Polish & Documentation**
-  - [ ] Thorough QA and README rewrite

+ 0 - 40
close_sprint10.py

@@ -1,40 +0,0 @@
-import requests
-import urllib3
-
-urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
-
-base_url = 'https://192.168.130.161/taiga/api/v1'
-auth = requests.post(f'{base_url}/auth', json={'type': 'normal', 'username': 'FrancoisLange', 'password': 'BTSai123'}, verify=False).json()
-headers = {'Authorization': f'Bearer {auth["auth_token"]}', 'Content-Type': 'application/json'}
-
-proj_id = 21
-
-# Get Closed status IDs
-us_statuses = requests.get(f'{base_url}/userstory-statuses?project={proj_id}', headers=headers, verify=False).json()
-task_statuses = requests.get(f'{base_url}/task-statuses?project={proj_id}', headers=headers, verify=False).json()
-
-closed_us_status = next((s['id'] for s in us_statuses if s['is_closed']), None)
-closed_task_status = next((s['id'] for s in task_statuses if s['is_closed']), None)
-
-# 2. Find Sprint 10
-milestones = requests.get(f'{base_url}/milestones?project={proj_id}', headers=headers, verify=False).json()
-sprint10 = next((m for m in milestones if m['name'] == 'Sprint 10'), None)
-
-if sprint10:
-    print(f"Closing User Stories and Tasks in Sprint 10 (ID: {sprint10['id']})")
-    
-    # Update User Stories
-    us_list = sprint10.get('user_stories', [])
-    for us in us_list:
-        payload = {"status": closed_us_status, "version": us['version']}
-        requests.patch(f'{base_url}/userstories/{us["id"]}', json=payload, headers=headers, verify=False)
-        print(f"Closed User Story: {us['subject']} (ID: {us['id']})")
-    
-    # Update Tasks
-    tasks = requests.get(f'{base_url}/tasks?project={proj_id}&milestone={sprint10["id"]}', headers=headers, verify=False).json()
-    for task in tasks:
-        payload = {"status": closed_task_status, "version": task['version']}
-        requests.patch(f'{base_url}/tasks/{task["id"]}', json=payload, headers=headers, verify=False)
-        print(f"Closed Task: {task['subject']} (ID: {task['id']})")
-else:
-    print("Sprint 10 not found.")

+ 0 - 86
configure_zabbix_dependencies.py

@@ -1,86 +0,0 @@
-import requests
-import json
-import time
-
-ZABBIX_API_URL = "http://localhost:8081/api_jsonrpc.php"
-ZABBIX_USER = "Admin"
-ZABBIX_PASSWORD = "zabbix" # Default zabbix admin password
-
-def authenticate():
-    payload = {
-        "jsonrpc": "2.0",
-        "method": "user.login",
-        "params": {
-            "user": ZABBIX_USER,
-            "password": ZABBIX_PASSWORD
-        },
-        "id": 1,
-        "auth": None
-    }
-    response = requests.post(ZABBIX_API_URL, json=payload).json()
-    if 'result' in response:
-        return response['result']
-    else:
-        print(f"Authentication failed: {response}")
-        return None
-
-def get_triggers(auth_token, description_search):
-    payload = {
-        "jsonrpc": "2.0",
-        "method": "trigger.get",
-        "params": {
-            "output": ["triggerid", "description"],
-            "search": {
-                "description": description_search
-            }
-        },
-        "id": 2,
-        "auth": auth_token
-    }
-    response = requests.post(ZABBIX_API_URL, json=payload).json()
-    return response.get('result', [])
-
-def set_dependency(auth_token, trigger_id, depends_on_trigger_id):
-    payload = {
-        "jsonrpc": "2.0",
-        "method": "trigger.update",
-        "params": {
-            "triggerid": trigger_id,
-            "dependencies": [
-                {"triggerid": depends_on_trigger_id}
-            ]
-        },
-        "id": 3,
-        "auth": auth_token
-    }
-    response = requests.post(ZABBIX_API_URL, json=payload).json()
-    if 'result' in response:
-        print(f"Successfully added dependency! Trigger {trigger_id} now depends on {depends_on_trigger_id}")
-    else:
-        print(f"Failed to add dependency: {response}")
-
-if __name__ == "__main__":
-    print("Waiting for Zabbix server to start...")
-    time.sleep(10) # Simple wait
-    
-    try:
-        auth_token = authenticate()
-        if not auth_token:
-            print("Cannot proceed without authentication.")
-            exit(1)
-            
-        # Example logic to find DB and App triggers (Names will depend on actual Zabbix config)
-        db_triggers = get_triggers(auth_token, "MySQL is down")
-        app_triggers = get_triggers(auth_token, "Application Food AI Down")
-        
-        if not db_triggers or not app_triggers:
-            print("Could not find the necessary triggers. They might need to be created first in Zabbix.")
-            print(f"DB Triggers found: {db_triggers}")
-            print(f"App Triggers found: {app_triggers}")
-        else:
-            db_trigger_id = db_triggers[0]['triggerid']
-            app_trigger_id = app_triggers[0]['triggerid']
-            set_dependency(auth_token, app_trigger_id, db_trigger_id)
-            
-    except Exception as e:
-        print(f"Error configuring Zabbix: {e}")

+ 0 - 90
configure_zabbix_email.py

@@ -1,90 +0,0 @@
-import requests
-import json
-import os
-
-ZABBIX_API_URL = "http://localhost:8081/api_jsonrpc.php"
-ZABBIX_USER = "Admin"
-ZABBIX_PASSWORD = "zabbix"
-
-def get_email_from_env():
-    if os.path.exists('.env'):
-        with open('.env', 'r') as f:
-            for line in f:
-                if line.startswith('ADMIN_EMAIL='):
-                    return line.strip().split('=', 1)[1]
-    return "lanfr144@gmail.com" # Default fallback
-
-def authenticate():
-    payload = {"jsonrpc": "2.0", "method": "user.login", "params": {"username": ZABBIX_USER, "password": ZABBIX_PASSWORD}, "id": 1}
-    try:
-        response = requests.post(ZABBIX_API_URL, json=payload).json()
-        print(f"Debug: Zabbix API Auth Response: {response}")
-        return response.get('result')
-    except Exception as e:
-        print(f"Error connecting to Zabbix API: {e}")
-        return None
-
-def configure_email(auth_token, email_address):
-    # 1. Update Admin User (ID 1) Media
-    print(f"Configuring Admin user to receive alerts at: {email_address}")
-    user_payload = {
-        "jsonrpc": "2.0",
-        "method": "user.update",
-        "params": {
-            "userid": "1",
-            "medias": [
-                {
-                    "mediatypeid": "1", # Default Email media type
-                    "sendto": [email_address],
-                    "active": 0, # Enabled
-                    "severity": 63, # All severities
-                    "period": "1-7,00:00-24:00"
-                }
-            ]
-        },
-        "id": 2,
-        "auth": auth_token
-    }
-    res = requests.post(ZABBIX_API_URL, json=user_payload).json()
-    if 'result' in res:
-        print("User media successfully updated.")
-    else:
-        print(f"Failed to update user media: {res}")
-        
-    # 2. Enable "Report problems to Zabbix administrators" Action
-    # Usually ID 2 or 3. Let's find it.
-    action_search = {
-        "jsonrpc": "2.0",
-        "method": "action.get",
-        "params": {
-            "output": ["actionid", "name"],
-            "search": {"name": "Report problems to Zabbix administrators"}
-        },
-        "id": 3,
-        "auth": auth_token
-    }
-    actions = requests.post(ZABBIX_API_URL, json=action_search).json().get('result', [])
-    if actions:
-        action_id = actions[0]['actionid']
-        action_enable = {
-            "jsonrpc": "2.0",
-            "method": "action.update",
-            "params": {"actionid": action_id, "status": 0}, # 0 is enabled
-            "id": 4,
-            "auth": auth_token
-        }
-        res_act = requests.post(ZABBIX_API_URL, json=action_enable).json()
-        if 'result' in res_act:
-            print(f"Alert Action '{actions[0]['name']}' successfully enabled.")
-        else:
-            print(f"Failed to enable action: {res_act}")
-    else:
-        print("Could not find default action 'Report problems to Zabbix administrators' to enable.")
-
-if __name__ == "__main__":
-    email = get_email_from_env()
-    token = authenticate()
-    if token:
-        configure_email(token, email)
-    else:
-        print("Could not authenticate to Zabbix on localhost:8081.")

+ 0 - 149
configure_zabbix_teams.py

@@ -1,149 +0,0 @@
-import requests
-import json
-import os
-
-ZABBIX_API_URL = "http://zabbix-web:8080/api_jsonrpc.php"
-ZABBIX_USER = "Admin"
-ZABBIX_PASSWORD = "zabbix"
-TEAMS_WEBHOOK_URL = "https://webhookbot.c-toss.com/api/bot/webhooks/7accc381-ae55-423c-9c08-6764c2813c8a"
-
-def authenticate():
-    payload = {"jsonrpc": "2.0", "method": "user.login", "params": {"username": ZABBIX_USER, "password": ZABBIX_PASSWORD}, "id": 1}
-    try:
-        response = requests.post(ZABBIX_API_URL, json=payload).json()
-        return response.get('result')
-    except Exception as e:
-        print(f"Error connecting to Zabbix API: {e}")
-        return None
-
-def configure_teams(auth_token):
-    print("Checking if MS Teams Webhook Media Type already exists...")
-    check_payload = {
-        "jsonrpc": "2.0",
-        "method": "mediatype.get",
-        "params": {
-            "output": ["mediatypeid", "name"],
-            "search": {"name": "MS Teams Custom Webhook"}
-        },
-        "id": 2,
-        "auth": auth_token
-    }
-    existing = requests.post(ZABBIX_API_URL, json=check_payload).json().get('result', [])
-    
-    if existing:
-        media_type_id = existing[0]['mediatypeid']
-        print(f"Media Type already exists with ID {media_type_id}. Updating it...")
-        update_payload = {
-            "jsonrpc": "2.0",
-            "method": "mediatype.update",
-            "params": {
-                "mediatypeid": media_type_id,
-                "parameters": [
-                    { "name": "URL", "value": TEAMS_WEBHOOK_URL },
-                    { "name": "Message", "value": "{ALERT.SUBJECT}\\n{ALERT.MESSAGE}" }
-                ]
-            },
-            "id": 3,
-            "auth": auth_token
-        }
-        requests.post(ZABBIX_API_URL, json=update_payload)
-    else:
-        print("Creating MS Teams Webhook Media Type...")
-        create_payload = {
-            "jsonrpc": "2.0",
-            "method": "mediatype.create",
-            "params": {
-                "type": 4, # 4 = Webhook
-                "name": "MS Teams Custom Webhook",
-                "script": "var req = new HttpRequest(); req.addHeader('Content-Type: application/json'); var params = JSON.parse(value); var payload = {'text': params.Message}; var resp = req.post(params.URL, JSON.stringify(payload)); Zabbix.log(4, '[Teams Webhook] response: ' + resp); return 'OK';",
-                "parameters": [
-                    { "name": "URL", "value": TEAMS_WEBHOOK_URL },
-                    { "name": "Message", "value": "{ALERT.SUBJECT}\\n{ALERT.MESSAGE}" }
-                ]
-            },
-            "id": 4,
-            "auth": auth_token
-        }
-        res = requests.post(ZABBIX_API_URL, json=create_payload).json()
-        if 'result' in res:
-            media_type_id = res['result']['mediatypeids'][0]
-            print(f"Created Media Type with ID: {media_type_id}")
-        else:
-            print(f"Failed to create Media Type: {res}")
-            return
-
-    # Assign to Admin user
-    print("Assigning Teams Webhook to Admin User...")
-    user_payload = {
-        "jsonrpc": "2.0",
-        "method": "user.update",
-        "params": {
-            "userid": "1",
-            "medias": [
-                {
-                    "mediatypeid": "1", # Email
-                    "sendto": ["lanfr144@gmail.com"],
-                    "active": 0,
-                    "severity": 63,
-                    "period": "1-7,00:00-24:00"
-                },
-                {
-                    "mediatypeid": media_type_id, # Teams
-                    "sendto": ["teams"],
-                    "active": 0,
-                    "severity": 63,
-                    "period": "1-7,00:00-24:00"
-                }
-            ]
-        },
-        "id": 5,
-        "auth": auth_token
-    }
-    res = requests.post(ZABBIX_API_URL, json=user_payload).json()
-    if 'result' in res:
-        print("User media successfully updated to include Teams.")
-    else:
-        print(f"Failed to update user media: {res}")
-
-    # Ensure action is enabled
-    print("Enabling Report problems to Zabbix administrators action...")
-    action_search = {
-        "jsonrpc": "2.0",
-        "method": "action.get",
-        "params": {
-            "output": ["actionid", "name"],
-            "search": {"name": "Report problems to Zabbix administrators"}
-        },
-        "id": 6,
-        "auth": auth_token
-    }
-    actions = requests.post(ZABBIX_API_URL, json=action_search).json().get('result', [])
-    if actions:
-        action_id = actions[0]['actionid']
-        action_enable = {
-            "jsonrpc": "2.0",
-            "method": "action.update",
-            "params": {"actionid": action_id, "status": 0},
-            "id": 7,
-            "auth": auth_token
-        }
-        requests.post(ZABBIX_API_URL, json=action_enable)
-        print("Action verified and enabled.")
-        
-        # Send a test message
-        test_payload = {
-            "text": "Hello World, From Zabbix Automated Webhook Script!"
-        }
-        print("Sending test message to Teams...")
-        try:
-            requests.post(TEAMS_WEBHOOK_URL, json=test_payload)
-            print("Test message sent!")
-        except Exception as e:
-            print(f"Failed to send test message: {e}")
-
-if __name__ == "__main__":
-    token = authenticate()
-    if token:
-        configure_teams(token)
-    else:
-        print("Failed to authenticate to Zabbix API.")
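Every call in the removed script rebuilds the same JSON-RPC envelope by hand. For future tooling, that repetition can be factored into one helper; this is a sketch (the `jsonrpc` name is invented here, and the `auth`-in-body convention the script relies on is the older Zabbix style — newer versions pass the token in an `Authorization` header instead):

```python
def jsonrpc(method, params, auth=None, req_id=1):
    """Build a Zabbix JSON-RPC 2.0 request body.

    `auth` is omitted for unauthenticated calls such as user.login.
    """
    body = {"jsonrpc": "2.0", "method": method, "params": params, "id": req_id}
    if auth is not None:
        body["auth"] = auth
    return body
```

A call site then shrinks to `requests.post(ZABBIX_API_URL, json=jsonrpc("action.get", {...}, auth=token))`.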

+ 0 - 50
create_bookmarks_page.py

@@ -1,50 +0,0 @@
-import requests
-import urllib3
-
-urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
-base_url = 'https://192.168.130.161/taiga/api/v1'
-auth = requests.post(f'{base_url}/auth', json={'type': 'normal', 'username': 'FrancoisLange', 'password': 'BTSai123'}, verify=False).json()
-headers = {'Authorization': f'Bearer {auth["auth_token"]}', 'Content-Type': 'application/json'}
-proj_id = 21
-
-slug = 'bookmarks'
-
-content = """# BOOKMARKS
-
-- [26.05.07 DAILY](260507-daily)
-- [26.05.07 REVIEW](260507-review)
-- [26.05.07 RETROSPECTIVE](260507-retrospective)
-- [26.05.07 PLAN](260507-plan)
-
-- [26.05.08 DAILY](260508-daily)
-- [26.05.08 REVIEW](260508-review)
-- [26.05.08 RETROSPECTIVE](260508-retrospective)
-- [26.05.08 PLAN](260508-plan)
-"""
-
-check_req = requests.get(f'{base_url}/wiki?project={proj_id}&slug={slug}', headers=headers, verify=False)
-if check_req.status_code == 200:
-    wiki_pages = check_req.json()
-    if len(wiki_pages) > 0:
-        page_id = wiki_pages[0]['id']
-        version = wiki_pages[0]['version']
-        payload = {
-            "project": proj_id,
-            "slug": slug,
-            "content": content,
-            "version": version
-        }
-        res = requests.put(f'{base_url}/wiki/{page_id}', json=payload, headers=headers, verify=False)
-        print("Updated bookmarks page!")
-        exit()
-
-payload = {
-    "project": proj_id,
-    "slug": slug,
-    "content": content
-}
-res = requests.post(f'{base_url}/wiki', json=payload, headers=headers, verify=False)
-if res.status_code == 201:
-    print("Created bookmarks page!")
-else:
-    print(f"Failed to create bookmarks: {res.text}")
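The create-or-update dance above (GET by slug, then PUT with `version`, else POST) is Taiga's optimistic-locking pattern: a PUT whose `version` is stale gets rejected, which is how concurrent edits are caught. A hypothetical helper that separates that decision from the I/O is easier to test; the names here are illustrative, not part of the deleted script:

```python
def wiki_upsert_request(base_url, proj_id, slug, content, existing=None):
    """Return (method, url, payload) for creating or updating a Taiga wiki page.

    `existing` is the first match from GET /wiki?project=...&slug=..., if any.
    """
    payload = {"project": proj_id, "slug": slug, "content": content}
    if existing:
        payload["version"] = existing["version"]  # required for the PUT
        return "PUT", f"{base_url}/wiki/{existing['id']}", payload
    return "POST", f"{base_url}/wiki", payload
```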

+ 0 - 19
create_taiga_task.py

@@ -1,19 +0,0 @@
-import requests, urllib3
-urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
-base_url = 'https://192.168.130.161/taiga/api/v1'
-auth = requests.post(f'{base_url}/auth', json={'type': 'normal', 'username': 'FrancoisLange', 'password': 'BTSai123'}, verify=False).json()
-headers = {'Authorization': f'Bearer {auth["auth_token"]}', 'Content-Type': 'application/json'}
-proj_id = 21
-
-milestones = requests.get(f'{base_url}/milestones?project={proj_id}', headers=headers, verify=False).json()
-sprint8 = next((m for m in milestones if m['name'] == 'Sprint 8'), None)
-sprint_id = sprint8['id'] if sprint8 else None
-
-payload = {"project": proj_id, "subject": "Deep System Overhaul Phase 3", "description": "Fix Clinical Search Crash, Plate Builder UI, and AI Meal Planner JSON parsing.", "milestone": sprint_id}
-res = requests.post(f'{base_url}/userstories', json=payload, headers=headers, verify=False).json()
-us_id = res['id']
-print(f"Created US: TG-{res['ref']}")
-
-t_payload = {"project": proj_id, "subject": "Execute Phase 3 Overhaul", "user_story": us_id, "milestone": sprint_id}
-t_res = requests.post(f'{base_url}/tasks', json=t_payload, headers=headers, verify=False).json()
-print(f"Created Task: TG-{t_res['ref']}")
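The `next(...)` milestone lookup above degrades gracefully: when the named sprint does not exist, the story lands in the backlog (`milestone: None`) instead of failing. Pulled out as a function (name assumed):

```python
def find_milestone_id(milestones, name):
    """First milestone id matching `name`, or None if the sprint is missing."""
    return next((m['id'] for m in milestones if m['name'] == name), None)
```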

+ 0 - 102
create_zabbix_dashboard.py

@@ -1,102 +0,0 @@
-import requests
-
-ZABBIX_API_URL = "http://192.168.130.170:8081/api_jsonrpc.php"
-ZABBIX_USER = "Admin"
-ZABBIX_PASSWORD = "zabbix"
-
-def authenticate():
-    payload = {
-        "jsonrpc": "2.0",
-        "method": "user.login",
-        "params": {"username": ZABBIX_USER, "password": ZABBIX_PASSWORD},
-        "id": 1
-    }
-    try:
-        response = requests.post(ZABBIX_API_URL, json=payload).json()
-        return response.get('result')
-    except Exception as e:
-        print(f"Error connecting to Zabbix API: {e}")
-        return None
-
-def get_snmp_item_id(auth_token):
-    # Retrieve the SNMP fallback item which catches all traps
-    payload = {
-        "jsonrpc": "2.0",
-        "method": "item.get",
-        "params": {
-            "output": ["itemid", "name", "key_"],
-            "search": {
-                "key_": "snmptrap.fallback"
-            }
-        },
-        "id": 2,
-        "auth": auth_token
-    }
-    response = requests.post(ZABBIX_API_URL, json=payload).json()
-    items = response.get('result', [])
-    if items:
-        print(f"Found SNMP Trap Item: {items[0]['itemid']}")
-        return items[0]['itemid']
-    else:
-        # Fallback to search any SNMP trap item
-        payload["params"]["search"] = {"key_": "snmptrap["}
-        response = requests.post(ZABBIX_API_URL, json=payload).json()
-        items = response.get('result', [])
-        if items:
-            print(f"Found specific SNMP Trap Item: {items[0]['itemid']}")
-            return items[0]['itemid']
-    print("Could not find any SNMP Trap item in Zabbix.")
-    return None
-
-def create_dashboard(auth_token, item_id):
-    print("Creating Food AI RAG Telemetry Dashboard...")
-    
-    # A Plaintext widget needs the itemid passed correctly in fields
-    widget_fields = []
-    if item_id:
-        widget_fields = [
-            {"type": 4, "name": "itemids", "value": item_id},
-            {"type": 0, "name": "show_lines", "value": 25}
-        ]
-
-    payload = {
-        "jsonrpc": "2.0",
-        "method": "dashboard.create",
-        "params": {
-            "name": "Food AI RAG Telemetry (Live)",
-            "userid": "1",
-            "pages": [
-                {
-                    "name": "SNMP Trap Activity",
-                    "widgets": [
-                        {
-                            "type": "plaintext",
-                            "name": "Ingestion Row Count Log",
-                            "x": 0, "y": 0, "width": 12, "height": 8,
-                            "fields": widget_fields
-                        },
-                        {
-                            "type": "systeminfo",
-                            "name": "Server Status",
-                            "x": 12, "y": 0, "width": 12, "height": 8
-                        }
-                    ]
-                }
-            ]
-        },
-        "id": 3,
-        "auth": auth_token
-    }
-    response = requests.post(ZABBIX_API_URL, json=payload).json()
-    if 'result' in response:
-        print(f"Dashboard Created successfully! ID: {response['result']['dashboardids'][0]}")
-    else:
-        print(f"Failed to create dashboard: {response}")
-
-if __name__ == "__main__":
-    token = authenticate()
-    if token:
-        item_id = get_snmp_item_id(token)
-        create_dashboard(token, item_id)
-    else:
-        print("❌ Could not authenticate to Zabbix. Ensure the server is fully started on port 8081.")
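Zabbix dashboard widget `fields` entries are typed integers, not named keys (in the API generation this script targets, `0` is an integer field and `4` an item reference; the codes vary between Zabbix versions, so verify against your server's API documentation). The removed script's field assembly, restated as a pure function:

```python
def plaintext_widget_fields(item_id, show_lines=25):
    """Fields for a 'plaintext' widget; empty when no item was found,
    so the widget is still created but simply shows no data."""
    if not item_id:
        return []
    return [
        {"type": 4, "name": "itemids", "value": item_id},        # 4 = item reference
        {"type": 0, "name": "show_lines", "value": show_lines},  # 0 = integer
    ]
```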

+ 0 - 71
generate_taiga_wiki.py

@@ -1,71 +0,0 @@
-import os
-from datetime import datetime, timedelta
-
-os.makedirs('taiga_wiki', exist_ok=True)
-
-start_date = datetime(2026, 4, 16)
-sprints = 8
-points_per_sprint = 1000
-
-# 00_Epics.md
-with open('taiga_wiki/00_Epics.md', 'w') as f:
-    f.write("# Project Epics\n\n1. Environment & Infrastructure Setup\n2. Database Schema & User Security\n3. Data Ingestion Pipeline\n4. Advanced Text & Context Search\n5. Local LLM Integration (Ollama)\n6. Streamlit Chat Interface Development\n7. Testing & Refinement\n8. Production Deployment\n")
-
-for i in range(1, sprints + 1):
-    sprint_start = start_date + timedelta(weeks=i-1)
-    sprint_end = sprint_start + timedelta(days=6)
-    sprint_str = f"Sprint_{i}"
-    
-    file_path = f"taiga_wiki/Sprint_{i}.md"
-    with open(file_path, 'w') as f:
-        f.write(f"# Sprint {i}\n\n")
-        f.write(f"**Sprint Tag**: {sprint_str}\n")
-        f.write(f"**Story Points**: {points_per_sprint}\n")
-        f.write(f"**Members**: francois, evegi144\n\n")
-        
-        # Sprint Planning
-        f.write(f"## {sprint_start.strftime('%Y/%m/%d')} Planning\n")
-        if i == 1:
-            f.write("- [x] Initialize Git Repo and configure AI History context.\n")
-            f.write("- [x] Setup Taiga Wiki and Backlog generation.\n")
-            f.write("- [x] Finalize `deploy.sh` and Database Setup (`init.sql`, `setup_db.py`).\n")
-            f.write("- [x] Data Ingestion Pipeline (`ingest_csv.py`, `convert_datatypes.py`).\n")
-            f.write("- [x] Build basic Streamlit Base App (`app.py`).\n\n")
-        elif i == 2:
-            f.write("- [x] Execute Sprint 2: Core Nutritional Database.\n")
-            f.write("- [x] Test and verify `ingest_csv.py` for CSV Pandas imports.\n")
-            f.write("- [x] Implement the Database Search tab in the Streamlit UI (`app.py`).\n\n")
-        else:
-            f.write("- Planning notes...\n\n")
-        
-        # Daily Scrums
-        for d in range(5):
-            day_date = sprint_start + timedelta(days=d)
-            f.write(f"### {day_date.strftime('%Y/%m/%d')} Daily Scrum\n")
-            f.write("- **evegi144**: \n")
-            if i == 1 and d == 0:
-                f.write("- **francois**: Set up git, database, and ingestion scripts.\n\n")
-            elif i == 2 and d == 0:
-                f.write("- **francois**: Verified Streamlit search views and Pandas ingestion pipeline.\n\n")
-            else:
-                f.write("- **francois**: \n\n")
-            
-        # Sprint Review
-        f.write(f"## {sprint_end.strftime('%Y/%m/%d')} Review\n")
-        if i == 1:
-            f.write("- **Review**: Successfully pushed all foundational files to Git and configured DB schemas.\n\n")
-        elif i == 2:
-            f.write("- **Review**: Data ingestion strategy and Streamlit search features are fully coded and finalized ahead of schedule.\n\n")
-        else:
-            f.write("- Review notes...\n\n")
-        
-        # Sprint Retrospective
-        f.write(f"## {sprint_end.strftime('%Y/%m/%d')} Retrospective\n")
-        if i == 1:
-            f.write("- **Retrospective**: Good velocity. Environment setup went smoothly.\n\n")
-        elif i == 2:
-            f.write("- **Retrospective**: Extremely efficient. By pre-building `app.py` search logic during Sprint 1, Sprint 2 was completed seamlessly.\n\n")
-        else:
-            f.write("- Retrospective notes...\n\n")
-
-print("Files generated successfully in taiga_wiki/")
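The generator derives every sprint's window from one fixed start date and a one-week cadence. That arithmetic, isolated from the file writing (function name assumed):

```python
from datetime import datetime, timedelta

def sprint_window(start, n):
    """Start and end dates of the n-th one-week sprint (1-based),
    matching the generator's `weeks=i-1` / `days=6` arithmetic."""
    sprint_start = start + timedelta(weeks=n - 1)
    return sprint_start, sprint_start + timedelta(days=6)
```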

+ 0 - 1
init_zabbix_db.sh

@@ -1 +0,0 @@
-mysql -e "CREATE DATABASE IF NOT EXISTS zabbix character set utf8mb4 collate utf8mb4_bin; CREATE USER IF NOT EXISTS 'zabbix'@'%' IDENTIFIED BY 'zabbix_pwd'; GRANT ALL PRIVILEGES ON zabbix.* TO 'zabbix'@'%'; SET GLOBAL log_bin_trust_function_creators = 1; FLUSH PRIVILEGES;"

+ 0 - 15
legacy_scripts/check_projects.py

@@ -1,15 +0,0 @@
-import requests
-import urllib3
-urllib3.disable_warnings()
-
-auth = requests.post(
-    'https://192.168.130.161/taiga/api/v1/auth', 
-    json={'type': 'normal', 'username': 'FrancoisLange', 'password': 'BTSai123'}, 
-    verify=False
-).json()
-
-headers = {'Authorization': f'Bearer {auth["auth_token"]}'}
-projs = requests.get('https://192.168.130.161/taiga/api/v1/projects', headers=headers, verify=False).json()
-print("Projects:")
-for p in projs:
-    print(f"ID: {p['id']}, Name: {p['name']}, Slug: {p['slug']}")

+ 0 - 76
legacy_scripts/convert_datatypes.py

@@ -1,76 +0,0 @@
-import pymysql
-import pandas as pd
-import getpass
-
-def detect_and_convert_types():
-    print("Welcome to the Data Types Optimizer.")
-    print("WARNING: This modifies your database schemas. You must authenticate as the database `db_owner`.\n")
-    
-    owner_pass = getpass.getpass("Enter the MySQL 'db_owner' password: ")
-
-    try:
-        conn = pymysql.connect(
-            host='127.0.0.1',
-            user='db_owner',
-            password=owner_pass,
-            database='food_db'
-        )
-        cursor = conn.cursor()
-    except Exception as e:
-        print(f"❌ Connection failed: {e}")
-        return
-
-    # Assuming we check common known numerical columns to shrink the DB footprint
-    columns_to_inspect = ["quantity", "created_t", "last_modified_t"]
-
-    for col in columns_to_inspect:
-        print(f"\nAnalyzing column: `{col}`")
-        
-        try:
-            # Check if column exists by picking 5000 non nulls
-            query = f"SELECT `{col}` FROM products WHERE `{col}` IS NOT NULL AND `{col}` != '' LIMIT 5000"
-            df = pd.read_sql(query, conn)
-        except Exception as e:
-            print(f" ⚠️ Could not read column `{col}`: {e}")
-            continue
-            
-        if df.empty:
-            print(f" ⏭️ Column `{col}` is entirely null/empty. Keeping as TEXT.")
-            continue
-            
-        series = df[col].astype(str).str.strip()
-        
-        # INTEGER CHECK
-        if series.str.match(r'^-?\d+$').all():
-            print(f" ⚙️ Status: ALL INTS matched. Converting `{col}` to BIGINT.")
-            try:
-                cursor.execute(f"UPDATE products SET `{col}` = NULL WHERE `{col}` = '';")
-                cursor.execute(f"ALTER TABLE products MODIFY COLUMN `{col}` BIGINT;")
-                conn.commit()
-                print(" ✅ Success")
-            except Exception as e:
-                print(f" ❌ Failed to alter table: {e}")
-            continue
-            
-        # FLOAT CHECK
-        test_float = series.str.replace(',', '.')
-        if test_float.str.match(r'^-?\d*\.\d+$').any() and test_float.str.match(r'^-?\d*\.?\d+$').all():
-            print(f" ⚙️ Status: FLOATS detected. Standardizing and converting `{col}` to DOUBLE...")
-            try:
-                cursor.execute(f"UPDATE products SET `{col}` = NULL WHERE `{col}` = '';")
-                cursor.execute(f"UPDATE products SET `{col}` = REPLACE(`{col}`, ',', '.') WHERE `{col}` LIKE '%,%';")
-                cursor.execute(f"ALTER TABLE products MODIFY COLUMN `{col}` DOUBLE;")
-                conn.commit()
-                print(" ✅ Success")
-            except Exception as e:
-                print(f" ❌ Failed to alter table: {e}")
-            continue
-
-        print(f" ⏭️ Keeping `{col}` as TEXT.")
-
-    cursor.close()
-    conn.close()
-    print("\n🎉 Datatype conversion complete!")
-
-if __name__ == '__main__':
-    detect_and_convert_types()
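The regex heuristic in the removed `convert_datatypes.py` (all integers → BIGINT; otherwise all numeric with at least one decimal, after comma-to-dot normalization → DOUBLE; anything else stays TEXT) does not need pandas or a live database to be exercised. Restated over plain strings as a sketch of the same rules, not a drop-in replacement:

```python
import re

INT_RE = re.compile(r'^-?\d+$')
NUM_RE = re.compile(r'^-?\d*\.?\d+$')

def infer_sql_type(values):
    """Apply the deleted script's type heuristic to a list of raw strings."""
    vals = [v.strip().replace(',', '.') for v in values if v and v.strip()]
    if not vals:
        return 'TEXT'  # nothing to sample: keep the column as-is
    if all(INT_RE.match(v) for v in vals):
        return 'BIGINT'
    if all(NUM_RE.match(v) for v in vals) and any('.' in v for v in vals):
        return 'DOUBLE'
    return 'TEXT'
```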

+ 0 - 23
legacy_scripts/fetch_tasks.py

@@ -1,23 +0,0 @@
-import requests
-import urllib3
-urllib3.disable_warnings()
-
-auth = requests.post(
-    'https://192.168.130.161/taiga/api/v1/auth', 
-    json={'type': 'normal', 'username': 'FrancoisLange', 'password': 'BTSai123'}, 
-    verify=False
-).json()
-
-headers = {'Authorization': f'Bearer {auth["auth_token"]}'}
-
-for pid in [18, 21]:
-    try:
-        tasks = requests.get(f'https://192.168.130.161/taiga/api/v1/tasks?project={pid}', headers=headers, verify=False).json()
-        if isinstance(tasks, list):
-            for t in tasks:
-                if str(t.get('ref')) in ['15', '16', '17', '18', '20', '21', '22']:
-                    status_id = t.get('status')
-                    status_info = requests.get(f'https://192.168.130.161/taiga/api/v1/task-statuses/{status_id}', headers=headers, verify=False).json() if status_id else {}
-                    print(f'Ref: TG-{t.get("ref")}, Status: {status_info.get("name", "Unknown")}, Subject: {t.get("subject")}')
-    except Exception as e:
-        print(f"Error fetching project {pid}: {e}")

+ 0 - 45
legacy_scripts/gen_presentation.py

@@ -1,45 +0,0 @@
-import markdown
-import os
-
-files_to_merge = ['AI_History/Client_Presentation.md', 'AI_History/status_report.md', 'AI_History/Retrospective.md']
-merged_md = ''
-for fName in files_to_merge:
-    if os.path.exists(fName):
-        with open(fName, 'r', encoding='utf-8') as f:
-            merged_md += f.read() + '\n\n---\n\n'
-
-html_content = markdown.markdown(merged_md, extensions=['tables'])
-
-html_template = f'''
-<!DOCTYPE html>
-<html>
-<head>
-    <meta charset="utf-8">
-    <title>Customer Presentation</title>
-    <style>
-        body {{ font-family: 'Segoe UI', Tahoma, Geneva, Verdana, sans-serif; line-height: 1.6; color: #333; max-width: 900px; margin: 0 auto; padding: 2rem; }}
-        h1 {{ color: #2c3e50; border-bottom: 2px solid #3498db; padding-bottom: 10px; }}
-        h2 {{ color: #2980b9; margin-top: 2rem; }}
-        h3 {{ color: #16a085; }}
-        table {{ border-collapse: collapse; width: 100%; margin-bottom: 2rem; }}
-        th, td {{ border: 1px solid #ddd; padding: 12px; text-align: left; }}
-        th {{ background-color: #f2f2f2; color: #333; }}
-        @media print {{
-            body {{ padding: 0; max-width: 100%; }}
-            hr {{ page-break-after: always; border: 0; }}
-        }}
-    </style>
-</head>
-<body>
-    <div style="text-align:center; margin-bottom: 3rem;">
-        <h1 style="border: none;">Clinical Food AI Platform</h1>
-        <p><strong>Master Deliverable Overview</strong></p>
-    </div>
-    {html_content}
-</body>
-</html>
-'''
-
-with open('Final_Presentation.html', 'w', encoding='utf-8') as f:
-    f.write(html_template)
-print('Generated HTML!')

+ 0 - 41
legacy_scripts/reset_pwd.py

@@ -1,41 +0,0 @@
-import bcrypt
-import pymysql
-import sys
-
-import myloginpath
-
-def get_db_connection():
-    conf = myloginpath.parse('app_auth')
-    return pymysql.connect(
-        host=conf.get('host', '127.0.0.1'),
-        user=conf.get('user', 'db_app_auth'),
-        password=conf.get('password'),
-        database='food_db',
-        cursorclass=pymysql.cursors.DictCursor,
-        autocommit=True
-    )
-
-def reset_pwd(username, plain_password):
-    conn = get_db_connection()
-    if not conn:
-        print("Failed DB connection!")
-        sys.exit(1)
-        
-    hashed = bcrypt.hashpw(plain_password.encode('utf-8'), bcrypt.gensalt()).decode('utf-8')
-    with conn.cursor() as cursor:
-        rows = cursor.execute("UPDATE users SET password_hash = %s WHERE username = %s", (hashed, username))
-        if rows > 0:
-            print(f"✅ Successfully updated password for {username}!")
-        else:
-            print(f"❌ User '{username}' not found in database!")
-    conn.close()
-
-if __name__ == "__main__":
-    if len(sys.argv) < 3:
-        username = input("Enter Username: ")
-        plain_password = input("Enter New Password: ")
-    else:
-        username = sys.argv[1]
-        plain_password = sys.argv[2]
-        
-    reset_pwd(username, plain_password)
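The removed `reset_pwd.py` depended on the third-party `bcrypt` package. The same hash-then-verify round trip can be illustrated with only the standard library via `hashlib.pbkdf2_hmac`; this is a sketch of the pattern (salted hash, constant-time comparison), not a recommendation to replace bcrypt:

```python
import hashlib
import hmac
import os

def hash_pwd(plain, salt=None, rounds=200_000):
    """Salted PBKDF2-SHA256 hash, stored as 'salthex$dkhex'."""
    salt = salt or os.urandom(16)
    dk = hashlib.pbkdf2_hmac('sha256', plain.encode('utf-8'), salt, rounds)
    return salt.hex() + '$' + dk.hex()

def check_pwd(plain, stored, rounds=200_000):
    """Recompute with the stored salt and compare in constant time."""
    salt_hex, dk_hex = stored.split('$')
    dk = hashlib.pbkdf2_hmac('sha256', plain.encode('utf-8'),
                             bytes.fromhex(salt_hex), rounds)
    return hmac.compare_digest(dk.hex(), dk_hex)
```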

+ 0 - 24
legacy_scripts/taiga_checker.py

@@ -1,24 +0,0 @@
-import requests, urllib3
-urllib3.disable_warnings()
-
-auth = requests.post(
-    'https://192.168.130.161/taiga/api/v1/auth', 
-    json={'type': 'normal', 'username': 'lanfr1904@outlook.com', 'password': 'BTSai123'}, 
-    verify=False
-).json()
-headers = {'Authorization': f'Bearer {auth["auth_token"]}'}
-
-proj_id = 21
-
-print("=== User Stories missing Tasks ===")
-us_list = requests.get(f'https://192.168.130.161/taiga/api/v1/userstories?project={proj_id}', headers=headers, verify=False).json()
-for us in us_list:
-    tasks = requests.get(f'https://192.168.130.161/taiga/api/v1/tasks?user_story={us["id"]}', headers=headers, verify=False).json()
-    if len(tasks) == 0:
-        print(f"US #{us['ref']}: {us['subject']}")
-
-print("\n=== User Stories missing Points ===")
-for us in us_list:
-    if us.get('total_points') == 0 or us.get('total_points') is None:
-        print(f"US #{us['ref']}: {us['subject']} (Points: {us.get('total_points')})")
-

+ 0 - 12
legacy_scripts/taiga_closeout.py

@@ -1,12 +0,0 @@
-import requests, urllib3
-urllib3.disable_warnings()
-auth = requests.post('https://192.168.130.161/taiga/api/v1/auth', json={'type': 'normal', 'username': 'FrancoisLange', 'password': 'BTSai123'}, verify=False).json()
-headers = {'Authorization': f'Bearer {auth["auth_token"]}'}
-
-epic_statuses = requests.get('https://192.168.130.161/taiga/api/v1/epic-statuses?project=21', headers=headers, verify=False).json()
-epic_closed_status = next((s['id'] for s in epic_statuses if s['is_closed']), None)
-
-epic = requests.get('https://192.168.130.161/taiga/api/v1/epics/by_ref?ref=28&project=21', headers=headers, verify=False).json()
-if 'id' in epic:
-    resp = requests.patch(f'https://192.168.130.161/taiga/api/v1/epics/{epic["id"]}', headers=headers, json={'status': epic_closed_status, 'version': epic['version']}, verify=False)
-    print(f'Epic TG-{epic["ref"]} Closing Status: {resp.status_code}')

+ 0 - 31
legacy_scripts/taiga_feed.py

@@ -1,31 +0,0 @@
-import requests, urllib3
-urllib3.disable_warnings()
-
-auth = requests.post(
-    'https://192.168.130.161/taiga/api/v1/auth', 
-    json={'type': 'normal', 'username': 'lanfr1904@outlook.com', 'password': 'BTSai123'}, 
-    verify=False
-).json()
-headers = {'Authorization': f'Bearer {auth["auth_token"]}'}
-
-proj_id = 21
-
-pts = requests.get(f'https://192.168.130.161/taiga/api/v1/points?project={proj_id}', headers=headers, verify=False).json()
-pt_map = {p['value']: p['id'] for p in pts}
-five_pt_id = pt_map.get(5)
-roles = requests.get(f'https://192.168.130.161/taiga/api/v1/roles?project={proj_id}', headers=headers, verify=False).json()
-role_id = roles[0]['id']
-
-us_list = requests.get(f'https://192.168.130.161/taiga/api/v1/userstories?project={proj_id}', headers=headers, verify=False).json()
-for us in us_list:
-    if us.get('total_points') == 0 or us.get('total_points') is None:
-        points_payload = us.get('points', {})
-        points_payload[str(role_id)] = five_pt_id
-        
-        resp = requests.patch(
-            f'https://192.168.130.161/taiga/api/v1/userstories/{us["id"]}', 
-            headers=headers, 
-            json={'points': points_payload, 'version': us['version']}, 
-            verify=False
-        )
-        print(f"Patched US {us['ref']} to 5 Points! Status: {resp.status_code}")
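The PATCH above echoes the story's current `version` back to Taiga; updates with a stale `version` are rejected by the server's optimistic-locking check. Building the body as a pure function keeps that detail in one place (helper name assumed, not part of the deleted script):

```python
def patch_points_payload(us, role_id, point_id):
    """Body for PATCH /userstories/{id}: merge the role->point mapping
    and carry the current `version` for Taiga's optimistic locking."""
    points = dict(us.get('points') or {})
    points[str(role_id)] = point_id
    return {'points': points, 'version': us['version']}
```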

+ 0 - 18
legacy_scripts/taiga_sprint4.py

@@ -1,18 +0,0 @@
-import requests, urllib3
-urllib3.disable_warnings()
-
-auth = requests.post(
-    'https://192.168.130.161/taiga/api/v1/auth', 
-    json={'type': 'normal', 'username': 'lanfr1904@outlook.com', 'password': 'BTSai123'}, 
-    verify=False
-).json()
-headers = {'Authorization': f'Bearer {auth["auth_token"]}'}
-proj_id = 21
-
-us_payload = {"project": proj_id, "subject": "Sprint 4: Operations & Migrations", "total_points": 5}
-new_us = requests.post('https://192.168.130.161/taiga/api/v1/userstories', headers=headers, json=us_payload, verify=False).json()
-
-tasks = ["Create unified PDF presentation for review", "Execute Alembic Database Migration scripting", "Sanitize Ollama Mistral LLM endpoints on .170", "Perform Green Recommendation Engine Demo"]
-for t in tasks:
-    requests.post('https://192.168.130.161/taiga/api/v1/tasks', headers=headers, json={"project": proj_id, "user_story": new_us['id'], "subject": t}, verify=False)
-print("Sprint 4 Filled on Taiga!")

+ 0 - 40
legacy_scripts/taiga_sprint4_deploy.py

@@ -1,40 +0,0 @@
-import requests, urllib3
-urllib3.disable_warnings()
-
-auth = requests.post(
-    'https://192.168.130.161/taiga/api/v1/auth', 
-    json={'type': 'normal', 'username': 'lanfr1904@outlook.com', 'password': 'BTSai123'}, 
-    verify=False
-).json()
-headers = {'Authorization': f'Bearer {auth["auth_token"]}'}
-proj_id = 21
-sprint4_id = 71
-
-us_list = requests.get(f'https://192.168.130.161/taiga/api/v1/userstories?project={proj_id}', headers=headers, verify=False).json()
-
-tasks = [
-    "Refactor Cryptography Bug - Replace dynamic salting loop with bcrypt.checkpw",
-    "Implement Horizontal Table Partitioning to bypass MySQL 65KB InnoDB limit",
-    "Construct dynamic UI multiselect for mapping 200 CSV columns seamlessly",
-    "Bind Pandas dataframes tightly to Memory logic preventing UI crashes",
-    "Overwrite LLM system prompts strictly for native Markdown gram output",
-    "Configure native mail throttle limits to block .pt.lu bounce delays"
-]
-
-target_us = None
-for us in us_list:
-    if "Sprint 4" in us['subject'] or us['milestone'] is None:
-        target_us = us
-        # Patch US to Sprint 4 milestone
-        res = requests.patch(f"https://192.168.130.161/taiga/api/v1/userstories/{us['id']}", 
-            headers=headers, 
-            json={"milestone": sprint4_id, "version": us['version']}, 
-            verify=False)
-        print(f"Mapped US {us['id']} ({us['subject']}) into Sprint 4 Milestone!")
-        
-        for t in tasks:
-            requests.post('https://192.168.130.161/taiga/api/v1/tasks', headers=headers, json={"project": proj_id, "user_story": us['id'], "subject": t}, verify=False)
-        print(f"Successfully appended granular deployment tasks into US: {us['id']}")
-        
-if not target_us:
-    print("No open unassigned User Stories found to append Sprint 4 data.")

+ 0 - 17
legacy_scripts/test_mail.py

@@ -1,17 +0,0 @@
-import smtplib
-from email.message import EmailMessage
-
-try:
-    msg = EmailMessage()
-    msg.set_content("This is an automated local environment test from the Clinical Food AI platform. If you are receiving this, your secure loopback postfix configuration is verified and functioning flawlessly over Port 25!")
-    msg['Subject'] = "Local Food AI: Internal Subnet Verification"
-    msg['From'] = "system@localfoodaimaster.com"
-    msg['To'] = '"Mr Lange François" <flange@pt.lu>'
-
-    # Strict loopback port explicitly targeting postfix configurations to bypass 0.0.0.0 leaks
-    s = smtplib.SMTP('localhost', 25)
-    s.send_message(msg)
-    s.quit()
-    print("✅ Email dispatched perfectly via local postfix socket!")
-except Exception as e:
-    print(f"❌ Failed to reach or broadcast via local SMTP Postfix. Error: {e}")

+ 0 - 21
legacy_scripts/test_taiga.py

@@ -1,21 +0,0 @@
-import requests
-import urllib3
-urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
-
-auth = requests.post(
-    'https://192.168.130.161/taiga/api/v1/auth', 
-    json={'type': 'normal', 'username': 'lanfr1904@outlook.com', 'password': 'BTSai123'}, 
-    verify=False
-).json()
-
-headers = {'Authorization': f'Bearer {auth["auth_token"]}'}
-proj_id = 21
-
-m_res = requests.get(f'https://192.168.130.161/taiga/api/v1/milestones?project={proj_id}', headers=headers, verify=False).json()
-print('Milestones:', [(m['name'], m['id']) for m in m_res])
-
-w_res = requests.get(f'https://192.168.130.161/taiga/api/v1/wiki?project={proj_id}', headers=headers, verify=False)
-if w_res.status_code == 200:
-    print('Wikis:', [(w['slug'], w['id']) for w in w_res.json()])
-else:
-    print('Wiki error:', w_res.text)

+ 0 - 125
project_text.txt

@@ -1,125 +0,0 @@
-Vision statement 
-A local food AI that provides full nutritional value information on any food and can 
-generate complete menu proposals based on the user's specification. 
-The user can: 
-* Create an account and log in. 
-* Get complete nutritional value information on any food, including macro nutrients, 
-minerals, vitamins, amino acids etc. 
-* Get the full nutritional value information for a given food combination, i.e. the user can 
-enter the quantities of several foods and get a full nutritional value overview. 
-* Search for specific nutrient content and get a sortable list of all foods that contain 
-this/these nutrient(s). 
-* Store food combinations in named and editable lists. 
-* Get menu proposals based on nutritional value and other food constraint input, e.g. 
-allergies. 
-* Freely chat about anything related to nutrition and get competent answers. 
-* The AI must have a local web search tool that it will use to anonymously gather any 
-info that it does not have in its local database. 
-* No user data leaves the server. 
-* The whole project must be in a public listed https://git.btshub.lu repo 
-named LocalFoodAI_<your IAM> that is updated in real-time. Make sure to prevent 
-confidential data from being uploaded. Add your teacher (id evegi144) as a collaborator. 
-Anyone should be able to clone the repo and get his own local food AI running with 
-ease. 
-* The application must be designed to run entirely on your provided Ubuntu 24.04 VM (8 
-vCPUs, 30 GB RAM, no dedicated GPU). You must evaluate and select appropriate 
-lightweight, quantized local LLMs (via Ollama) and databases that fit securely within 
-this hardware limit. 
-Roles 
-• You are the product owner and tech lead, whilst Antigravity is your development 
-team. This means that you interview the customer to extract the vision, and then 
-write the Taiga backlog. 
-• Your teacher is the customer and Scrum master. Add your teacher 
-(gilles.everling@education.lu) as a stakeholder to your Taiga project. 
-Workflow 
-• You follow the Scrum manual. 
-• You agree on the meaning of "Done" with your teacher.  
-• The sprint time box is one week. 
-• All meetings are documented in the Wiki. 
-• In the Daily Scrum meeting you answer and document in the Wiki the three 
-questions: 
-o What has been accomplished since the last meeting? 
-o What will be done before the next meeting? 
-o What obstacles are in the way? 
-• Thursday: 
-o Sprint Review meeting that lasts at most 15 minutes. 
-o Sprint Retrospective meeting that lasts at most 15 minutes. 
-o Sprint Planning meeting that lasts at most 30 minutes. You estimate the 
-capacity of work, produce the Sprint Backlog and Sprint Goal. 
-o Daily Scrum meeting that lasts at most 5 minutes.  
-• Friday: 
-o Daily Scrum meeting that lasts at most 5 minutes. 
-• You perform continuous product backlog grooming. 
-• During Sprint Planning, you create User Stories in Taiga and break them down 
-into smaller technical Tasks. To execute the work, you feed the overarching User 
-Story to the agent for context, but you assign it the specific Task for execution. 
-• Before the agent writes the code, it generates an Implementation Plan (an 
-Antigravity Artifact). You must set your Antigravity Review Policy to "Request 
-Review" (rather than "Always Proceed"). You have to read, evaluate, and approve 
-the AI's Python and database logic before it is committed, ensuring you actually 
-understand the code being generated. 
-• You can use the Gemini CLI to quickly test database queries or prompt logic in 
-the terminal before having the Antigravity agent implement it into the main 
-codebase. 
-• All artifacts generated by Antigravity, e.g. task lists, implementation plans and 
-browser recordings, must be attached to the corresponding task and/or user 
-story in Taiga. 
-• In a real-world Agile environment, code commits must be traceable back to the 
-business requirements (User Stories). You must instruct your Antigravity 
-agent(s): "When you commit this code, use the commit message: 'TG-<Taiga 
-task id>: <Taiga task description>'. Do not attempt to push". If the AI reports a 
-'sandbox error' or 'permission denied' when trying to push, it is not a 
-technical problem - it is a safety feature. Perform the push manually in the 
-terminal, then tell the AI to proceed with the next step. 
-• Daily Workflow with Antigravity 
-You are the Tech Lead; Antigravity is your Development Team. You do not write 
-code; you direct the Antigravity agent. 
-o Planning (Thursday): 
-• Create User Stories with clear story title and acceptance criteria in 
-Taiga for your planned work. To do this you can provide Antigravity 
-with the project vision statement and let it generate all the scrum 
-artifacts for you and it can even feed them directly into Taiga via 
-the Taiga API. 
-• In Taiga, break the Story into specific tasks for Antigravity and note 
-the task ids (e.g. #3). 
-o Development (Thursday/Friday): 
-• Orchestrate: Feed your Taiga Task to Antigravity’s Agent Manager. 
-Here's an example prompt: "Work on Taiga Task #1. Write a Python 
-script to initialize the SQLite database. Once complete, stage, 
-commit, and push the files with the commit message: 'TG-1: 
-Initialize SQLite DB'." 
-Pro-Tip: Create a PROJECT_CONTEXT.md file in the root of your 
-repository that outlines your tech stack (Ubuntu, Ollama, SearXNG, 
-Database, ...) and the strict "No external APIs" rule. Instruct 
-Antigravity to "Read PROJECT_CONTEXT.md before starting" so it 
-never hallucinates cloud-based solutions. 
-• Review: Set Review Policy to "Request Review." Evaluate the AI's 
-Python/Database logic before approving. Before approving the 
-commit, verify that: 
-• The agent included the TG-<ID> prefix in the commit 
-message. 
-• The code is strictly local (no external API calls). 
-• The logic is correct based on your technical research. 
-• Attach: Download the AI's Implementation Plan and attach it to the 
-Taiga ticket. 
-o The Golden Rule of committing: 
-Every time you push code, make sure Antigravity has linked it to a Taiga 
-Task ID in the commit message so the webhook works. If that's the case, 
-you only need to run git push if not: 
-git add . 
-git commit -m "TG-<ID>: [Your short description]" 
-git push 
-(Example: git commit -m "TG-1: Setup SQLite DB structure" ) 
-o Verification 
-• Taiga: Open your User Story (e.g., #1) and check the History/Git 
-tab. You should see your commit message appear there 
-automatically. 
-• Gogs: Check your Webhook "Recent Deliveries." You should see 
-green checkmarks (204 No Content). 
-Troubleshooting: 
-• If you do not tell the AI to use the TG-<ID> format in the 
-commit message, your Taiga board will not update. 
-• If you see "Referenced element doesn't exist," verify your 
-commit message has the correct TG-<ID> format matching 
-an existing Story ID in Taiga. 
- 
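The TG-<ID> convention described above is easy to enforce mechanically. Below is a minimal sketch of a commit-msg hook in Python; the pattern and error text are our own illustration, not part of the removed file, and it assumes git invokes the hook with the commit-message file path as its first argument (which is the standard commit-msg contract):

```python
import re
import sys

# Taiga webhook convention from the project notes: 'TG-<Taiga task id>: <description>'.
TG_PATTERN = re.compile(r"^TG-\d+: .+")

def check_message(message: str) -> bool:
    """Return True if the first line of a commit message carries a TG-<id> prefix."""
    first_line = message.splitlines()[0] if message.strip() else ""
    return bool(TG_PATTERN.match(first_line))

if __name__ == "__main__" and len(sys.argv) > 1:
    # git passes the path of the commit message file as the first argument.
    with open(sys.argv[1], encoding="utf-8") as f:
        if not check_message(f.read()):
            sys.stderr.write("Rejected: message must start with 'TG-<id>: <description>'\n")
            sys.exit(1)
```

Saved as `.git/hooks/commit-msg` and made executable, a hook like this rejects commits that the Taiga webhook would otherwise silently ignore.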

+ 0 - 1
reset_zabbix_db.sh

@@ -1 +0,0 @@
-mysql -e "DROP DATABASE zabbix; CREATE DATABASE zabbix character set utf8mb4 collate utf8mb4_bin; GRANT ALL PRIVILEGES ON zabbix.* TO 'zabbix'@'%'; FLUSH PRIVILEGES;"

+ 0 - 7
retro_planning_text.txt

@@ -1,7 +0,0 @@
-RETRO PLANNING
-Retro planning, also known as backward planning or reverse planning, is, as the name suggests, built in reverse chronological order from a fixed deadline. This is especially useful when the delivery date is set from the start and cannot be moved!
-The reverse planning often takes the shape of a Gantt chart, which gives a visual representation of the different steps and phases of the project, as well as the start and end dates of each task.
-Advantages: ■ Avoid missing deadlines: planning the development process backwards from the deadline is the best way to plan and act in accordance with the time frame given to the project. ■ Visibility: planning in reverse gives you more visibility on your progress and the time left, making it easier to avoid delays and overruns. ■ Ensure the feasibility of the project: backward planning allows you to gauge whether or not the objectives are realistically achievable in a given time.
-More advantages: ■ Adapt the duration of specific tasks according to the remaining time: you can anticipate the leeway available to spend more time on high value-added tasks, and identify the deliverables on which you could afford to spend less time and effort. Overall, planning tasks becomes more granular and accurate. ■ Manage resources more effectively: this goes hand in hand with the precision gained in project planning. By assessing the time allowed for each task, you can also infer the number of resources required to complete the tasks on time.
-Build your retro planning: start from the deadline, positioning the last task on the D-Day, then the one before, and so forth (working backwards). In some cases you may not have enough room left to plan the first few tasks; this means the ideal start date is already behind you, so you must make a decision in order to hit the deadline nonetheless: review your priorities or the scope of the project, adjust the time frame and delays, or add resources if possible. Whenever possible, include some leeway in your estimates so you can stay on track even when problems arise along the project life cycle!

+ 0 - 41
scratch/audit_taiga.py

@@ -1,41 +0,0 @@
-import requests
-import urllib3
-
-urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
-base_url = 'https://192.168.130.161/taiga/api/v1'
-
-def audit():
-    try:
-        auth = requests.post(f'{base_url}/auth', json={'type': 'normal', 'username': 'FrancoisLange', 'password': 'BTSai123'}, verify=False).json()
-        headers = {'Authorization': f'Bearer {auth["auth_token"]}', 'Content-Type': 'application/json'}
-        proj_id = 21
-
-        milestones = requests.get(f'{base_url}/milestones?project={proj_id}', headers=headers, verify=False).json()
-        sprint11 = next((m for m in milestones if m['name'] == 'Sprint 11'), None)
-        
-        if not sprint11:
-            print("Sprint 11 not found.")
-            return
-
-        sprint_id = sprint11['id']
-        print(f"--- SPRINT 11 AUDIT ---")
-        
-        us_statuses = requests.get(f'{base_url}/userstory-statuses?project={proj_id}', headers=headers, verify=False).json()
-        status_map = {s['id']: s['name'] for s in us_statuses}
-        
-        us_list = requests.get(f'{base_url}/userstories?project={proj_id}&milestone={sprint_id}', headers=headers, verify=False).json()
-        
-        all_closed = True
-        for us in us_list:
-            status_name = status_map.get(us['status'], 'Unknown')
-            print(f"[US] {us['subject']} - Status: {status_name}")
-            if status_name.lower() != 'closed':
-                all_closed = False
-                
-        print(f"Sprint fully closed? {'YES' if all_closed else 'NO'}")
-
-    except Exception as e:
-        print(f"Failed to audit Taiga: {e}")
-
-if __name__ == "__main__":
-    audit()

+ 0 - 44
scratch/close_sprint12.py

@@ -1,44 +0,0 @@
-import requests
-import urllib3
-
-urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
-
-base_url = 'https://192.168.130.161/taiga/api/v1'
-auth = requests.post(f'{base_url}/auth', json={'type': 'normal', 'username': 'FrancoisLange', 'password': 'BTSai123'}, verify=False).json()
-headers = {'Authorization': f'Bearer {auth["auth_token"]}', 'Content-Type': 'application/json'}
-proj_id = 21
-
-print("Fetching Sprints...")
-milestones = requests.get(f'{base_url}/milestones?project={proj_id}', headers=headers, verify=False).json()
-sprint12 = next((m for m in milestones if m['name'] == 'Sprint 12'), None)
-
-if not sprint12:
-    print("Sprint 12 not found! Exiting.")
-    exit(1)
-    
-sprint_id = sprint12['id']
-
-# Get Closed status IDs
-us_statuses = requests.get(f'{base_url}/userstory-statuses?project={proj_id}', headers=headers, verify=False).json()
-task_statuses = requests.get(f'{base_url}/task-statuses?project={proj_id}', headers=headers, verify=False).json()
-
-closed_us_status = next((s['id'] for s in us_statuses if s['is_closed']), None)
-closed_task_status = next((s['id'] for s in task_statuses if s['is_closed']), None)
-
-# Update User Stories
-us_list = requests.get(f'{base_url}/userstories?project={proj_id}&milestone={sprint_id}', headers=headers, verify=False).json()
-for us in us_list:
-    if us['status'] != closed_us_status:
-        payload = {"status": closed_us_status, "version": us['version']}
-        requests.patch(f'{base_url}/userstories/{us["id"]}', json=payload, headers=headers, verify=False)
-        print(f"Closed User Story: {us['subject']}")
-
-# Update Tasks
-tasks = requests.get(f'{base_url}/tasks?project={proj_id}&milestone={sprint_id}', headers=headers, verify=False).json()
-for task in tasks:
-    if task['status'] != closed_task_status:
-        payload = {"status": closed_task_status, "version": task['version']}
-        requests.patch(f'{base_url}/tasks/{task["id"]}', json=payload, headers=headers, verify=False)
-        print(f"Closed Task: {task['subject']}")
-
-print("Sprint 12 successfully closed!")

+ 0 - 17
scratch/read_pdfs.py

@@ -1,17 +0,0 @@
-import os
-from pypdf import PdfReader
-
-def extract_pdf(pdf_path, txt_path):
-    try:
-        reader = PdfReader(pdf_path)
-        text = ""
-        for page in reader.pages:
-            text += page.extract_text() + "\n"
-        with open(txt_path, "w", encoding="utf-8") as f:
-            f.write(text)
-        print(f"Extracted {pdf_path} to {txt_path}")
-    except Exception as e:
-        print(f"Failed to read {pdf_path}: {e}")
-
-extract_pdf(r"c:\Users\lanfr144\Documents\DOPRO1\Antigravity\Food\Project.pdf", "project_text.txt")
-extract_pdf(r"c:\Users\lanfr144\Documents\DOPRO1\Antigravity\Food\Retro Planning.pdf", "retro_planning_text.txt")

+ 0 - 55
scratch/setup_sprint12_taiga.py

@@ -1,55 +0,0 @@
-import requests
-import urllib3
-from datetime import datetime, timedelta
-
-urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
-base_url = 'https://192.168.130.161/taiga/api/v1'
-
-auth = requests.post(f'{base_url}/auth', json={'type': 'normal', 'username': 'FrancoisLange', 'password': 'BTSai123'}, verify=False).json()
-headers = {'Authorization': f'Bearer {auth["auth_token"]}', 'Content-Type': 'application/json'}
-proj_id = 21
-
-print("Fetching Sprints...")
-milestones = requests.get(f'{base_url}/milestones?project={proj_id}', headers=headers, verify=False).json()
-sprint12 = next((m for m in milestones if m['name'] == 'Sprint 12'), None)
-
-if not sprint12:
-    print("Sprint 12 not found, creating it...")
-    payload = {
-        "project": proj_id,
-        "name": "Sprint 12",
-        "estimated_start": datetime.now().strftime('%Y-%m-%d'),
-        "estimated_finish": (datetime.now() + timedelta(days=7)).strftime('%Y-%m-%d')
-    }
-    sprint12 = requests.post(f'{base_url}/milestones', json=payload, headers=headers, verify=False).json()
-    
-sprint_id = sprint12['id']
-print(f"Sprint 12 ID: {sprint_id}")
-
-stories = [
-    {"subject": "AI Meal Plan PDF Generation", "description": "Implement FPDF2 script to intercept the Ollama Markdown output, parse the tables, and provide a downloadable PDF artifact for dietitians to hand out."},
-    {"subject": "Health Profile Input Constraints", "description": "Refactor the EAV profile tab to use specific drop-down menus instead of text-inputs to strictly enforce the clinical backend flags (Kosher, Halal, Diabetes)."}
-]
-
-for s in stories:
-    payload = {
-        "project": proj_id,
-        "subject": s["subject"],
-        "description": s["description"],
-        "milestone": sprint_id
-    }
-    res = requests.post(f'{base_url}/userstories', json=payload, headers=headers, verify=False)
-    if res.status_code == 201:
-        us = res.json()
-        print(f"Created US: {us['subject']}")
-        
-        # Create a task for it
-        t_payload = {
-            "project": proj_id,
-            "subject": f"Execute: {us['subject']}",
-            "user_story": us['id'],
-            "milestone": sprint_id
-        }
-        requests.post(f'{base_url}/tasks', json=t_payload, headers=headers, verify=False)
-
-print("Sprint 12 populated!")

+ 0 - 56
scratch/setup_sprint13_taiga.py

@@ -1,56 +0,0 @@
-import requests
-import urllib3
-from datetime import datetime, timedelta
-
-urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
-base_url = 'https://192.168.130.161/taiga/api/v1'
-
-auth = requests.post(f'{base_url}/auth', json={'type': 'normal', 'username': 'FrancoisLange', 'password': 'BTSai123'}, verify=False).json()
-headers = {'Authorization': f'Bearer {auth["auth_token"]}', 'Content-Type': 'application/json'}
-proj_id = 21
-
-print("Fetching Sprints...")
-milestones = requests.get(f'{base_url}/milestones?project={proj_id}', headers=headers, verify=False).json()
-sprint13 = next((m for m in milestones if m['name'] == 'Sprint 13'), None)
-
-if not sprint13:
-    print("Sprint 13 not found, creating it...")
-    payload = {
-        "project": proj_id,
-        "name": "Sprint 13",
-        "estimated_start": datetime.now().strftime('%Y-%m-%d'),
-        "estimated_finish": (datetime.now() + timedelta(days=7)).strftime('%Y-%m-%d')
-    }
-    sprint13 = requests.post(f'{base_url}/milestones', json=payload, headers=headers, verify=False).json()
-    
-sprint_id = sprint13['id']
-print(f"Sprint 13 ID: {sprint_id}")
-
-stories = [
-    {"subject": "Deploy Local SearXNG Web Search Tool", "description": "The AI must have a local web search tool that it will use to anonymously gather any info that it does not have in its local database."}
-]
-
-for s in stories:
-    payload = {
-        "project": proj_id,
-        "subject": s["subject"],
-        "description": s["description"],
-        "milestone": sprint_id
-    }
-    res = requests.post(f'{base_url}/userstories', json=payload, headers=headers, verify=False)
-    if res.status_code == 201:
-        us = res.json()
-        print(f"Created US: {us['subject']}")
-        
-        # Create tasks
-        tasks = ["Inject SearXNG container into docker-compose.yml", "Implement Web Search Heuristic fallback in AI Chat", "Integrate SearXNG API payload parsing with Ollama"]
-        for t in tasks:
-            t_payload = {
-                "project": proj_id,
-                "subject": t,
-                "user_story": us['id'],
-                "milestone": sprint_id
-            }
-            requests.post(f'{base_url}/tasks', json=t_payload, headers=headers, verify=False)
-
-print("Sprint 13 populated!")

+ 0 - 39
scratch/test_search.py

@@ -1,39 +0,0 @@
-import os
-import pymysql
-
-def get_db_connection():
-    db_host = os.environ.get('DB_HOST', '127.0.0.1')
-    db_user = os.environ.get('DB_READER_USER', 'db_reader')
-    db_pass = os.environ.get('DB_READER_PASS', 'reader_pass')
-    return pymysql.connect(host=db_host, user=db_user, password=db_pass, database='food_db', cursorclass=pymysql.cursors.DictCursor)
-
-conn = get_db_connection()
-with conn.cursor() as cursor:
-    cursor.execute("SELECT COUNT(*) as c FROM food_db.products_core")
-    print("Total Core Rows:", cursor.fetchone()['c'])
-    
-    sq = "white rice"
-    bool_search = " ".join([f"+{w}" for w in sq.split()])
-    print("Search query:", bool_search)
-    
-    sql = """
-    SELECT code, product_name 
-    FROM food_db.products_core
-    WHERE MATCH(product_name, ingredients_text) AGAINST(%s IN BOOLEAN MODE)
-    LIMIT 5
-    """
-    cursor.execute(sql, (bool_search,))
-    res = cursor.fetchall()
-    print("Result (MATCH product_name, ingredients_text):", len(res))
-    
-    # Fall back to a plain LIKE scan to confirm matching rows exist even when full-text search returns nothing
-    sql = """
-    SELECT code, product_name 
-    FROM food_db.products_core
-    WHERE product_name LIKE %s
-    LIMIT 5
-    """
-    cursor.execute(sql, (f"%{sq}%",))
-    res2 = cursor.fetchall()
-    print("Result (LIKE):", len(res2))
-conn.close()
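The scratch script above builds its BOOLEAN MODE query string inline. Factoring that into a helper also gives a natural place to strip characters that MySQL treats as operators, so user input cannot change the meaning of the MATCH ... AGAINST expression. A hypothetical sketch (the helper name is ours, not part of the removed file):

```python
def boolean_query(terms: str) -> str:
    """Turn free text into a MySQL BOOLEAN MODE query requiring every word.

    '+word' marks a word as mandatory; characters that act as operators in
    BOOLEAN MODE (+ - < > ~ * " ( ) @) are stripped from word edges so user
    input cannot alter the MATCH ... AGAINST expression.
    """
    cleaned = (w.strip('+-<>~*"()@') for w in terms.split())
    return " ".join(f"+{w}" for w in cleaned if w)

# boolean_query("white rice") -> "+white +rice"
```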

+ 0 - 26
setup_mail_forwarding.sh

@@ -1,26 +0,0 @@
-#!/bin/bash
-# run this as root/sudo on the Ubuntu VM
-
-echo "Setting up centralized mail forwarding to lanfr144@gmail.com..."
-
-# 1. Update the skeleton directory so all NEW users created automatically forward mail
-echo "lanfr144@gmail.com" | sudo tee /etc/skel/.forward
-sudo chmod 644 /etc/skel/.forward
-
-# 2. Add forwarding to all dynamically created home directories
-for user_dir in /home/*; do
-  if [ -d "$user_dir" ]; then
-    user_name=$(basename "$user_dir")
-    echo "lanfr144@gmail.com" | sudo tee "$user_dir/.forward"
-    sudo chown "$user_name":"$user_name" "$user_dir/.forward"
-    sudo chmod 644 "$user_dir/.forward"
-    echo "Configured for user: $user_name"
-  fi
-done
-
-# 3. Add forwarding for root manually
-echo "lanfr144@gmail.com" | sudo tee /root/.forward
-sudo chmod 644 /root/.forward
-echo "Configured for root."
-
-echo "✅ All system mail will now forward to lanfr144@gmail.com"

+ 0 - 83
setup_nginx_zabbix.py

@@ -1,83 +0,0 @@
-import requests
-import json
-import os
-
-ZABBIX_API_URL = "http://zabbix-web:8080/api_jsonrpc.php"
-ZABBIX_USER = "Admin"
-ZABBIX_PASSWORD = "zabbix"
-
-def authenticate():
-    payload = {"jsonrpc": "2.0", "method": "user.login", "params": {"username": ZABBIX_USER, "password": ZABBIX_PASSWORD}, "id": 1}
-    try:
-        response = requests.post(ZABBIX_API_URL, json=payload).json()
-        return response.get('result')
-    except Exception as e:
-        print(f"Error connecting to Zabbix API: {e}")
-        return None
-
-def configure_nginx_web_scenario(auth_token):
-    # Get Zabbix server host ID
-    host_search = {
-        "jsonrpc": "2.0",
-        "method": "host.get",
-        "params": {
-            "filter": {"host": ["Zabbix server"]}
-        },
-        "id": 2,
-        "auth": auth_token
-    }
-    hosts = requests.post(ZABBIX_API_URL, json=host_search).json().get('result', [])
-    if not hosts:
-        print("Could not find Zabbix server host.")
-        return
-    host_id = hosts[0]['hostid']
-
-    print("Checking if Nginx Web Scenario already exists...")
-    scenario_search = {
-        "jsonrpc": "2.0",
-        "method": "httptest.get",
-        "params": {
-            "filter": {"name": ["Nginx Streamlit Proxy Check"]}
-        },
-        "id": 3,
-        "auth": auth_token
-    }
-    scenarios = requests.post(ZABBIX_API_URL, json=scenario_search).json().get('result', [])
-    
-    if scenarios:
-        print("Nginx Web Scenario already exists.")
-        return
-
-    print("Creating Nginx Web Scenario...")
-    create_payload = {
-        "jsonrpc": "2.0",
-        "method": "httptest.create",
-        "params": {
-            "name": "Nginx Streamlit Proxy Check",
-            "hostid": host_id,
-            "delay": "1m",
-            "retries": 3,
-            "steps": [
-                {
-                    "name": "Check Proxy Root",
-                    "url": "http://nginx:80",
-                    "status_codes": "200",
-                    "no": 1
-                }
-            ]
-        },
-        "id": 4,
-        "auth": auth_token
-    }
-    res = requests.post(ZABBIX_API_URL, json=create_payload).json()
-    if 'result' in res:
-        print(f"Successfully created Nginx Web Scenario.")
-    else:
-        print(f"Failed to create Web Scenario: {res}")
-
-if __name__ == "__main__":
-    token = authenticate()
-    if token:
-        configure_nginx_web_scenario(token)
-    else:
-        print("Failed to authenticate to Zabbix API.")

+ 0 - 19
setup_postfix.sh

@@ -1,19 +0,0 @@
-#!/bin/bash
-# run this as root/sudo on the Ubuntu VM to configure SMTP for password resets
-
-echo "🔧 Installing and Configuring Postfix for Local Food AI..."
-
-sudo apt-get update
-# Non-interactive installation of postfix configured for local delivery
-sudo DEBIAN_FRONTEND=noninteractive apt-get install -y postfix
-
-echo "🔒 Disabling external relay to maintain 100% Privacy-First Architecture..."
-# Ensure postfix only listens to localhost for security
-sudo postconf -e "inet_interfaces = loopback-only"
-sudo postconf -e "mydestination = localhost.localdomain, localhost"
-
-echo "🔄 Restarting Mail Service..."
-sudo systemctl restart postfix
-sudo systemctl enable postfix
-
-echo "✅ Success! The 'Forgot Password' feature in the Streamlit UI will now officially route emails to users via the internal Ubuntu backbone!"

+ 0 - 51
setup_searxng.sh

@@ -1,51 +0,0 @@
-#!/bin/bash
-# Local Food AI - SearXNG Setup (Docker)
-
-echo "========================================================="
-echo "🔍 Installing Docker & SearXNG Locally"
-echo "========================================================="
-
-echo "[1/4] Installing Docker..."
-sudo apt update
-sudo apt install -y docker.io
-sudo systemctl enable docker
-sudo systemctl start docker
-
-echo "[2/4] Setting up SearXNG environment structure..."
-sudo mkdir -p /etc/searxng
-
-echo "[3/4] Generating strict local AI settings.yml..."
-# We explicitly enable the JSON format so the Python app can query the API programmatically.
-# Use tee so the write into /etc/searxng happens under sudo (a plain '>' redirect would
-# run as the invoking user and fail on a root-owned directory).
-sudo tee /etc/searxng/settings.yml > /dev/null << 'EOF'
-use_default_settings: true
-general:
-  debug: false
-  instance_name: "Local Food AI Search"
-search:
-  safe_search: 0
-  autocomplete: ""
-  default_lang: "en"
-  formats:
-    - html
-    - json
-server:
-  port: 8080
-  bind_address: "0.0.0.0"
-  secret_key: "ai_food_search_secret_key"
-  limiter: false
-  image_proxy: true
-EOF
-
-echo "[4/4] Launching official SearXNG Container..."
-# Bind strictly to localhost (127.0.0.1) so no one outside the VM can hit the search engine
-sudo docker stop searxng 2>/dev/null || true
-sudo docker rm searxng 2>/dev/null || true
-
-sudo docker run -d \
-  -p 127.0.0.1:8080:8080 \
-  -v /etc/searxng:/etc/searxng \
-  --name searxng \
-  --restart always \
-  searxng/searxng
-
-echo "✅ SearXNG is now running firmly isolated on http://127.0.0.1:8080!"
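The settings.yml above enables the `json` format precisely so the application can call the instance programmatically via SearXNG's standard `/search?format=json` endpoint. A small sketch of building such a request URL (the helper name and default base URL are our illustration, matching the localhost binding in the script):

```python
from urllib.parse import quote

def build_search_url(query: str, base_url: str = "http://127.0.0.1:8080") -> str:
    """URL for SearXNG's JSON API; requires 'json' under search.formats in settings.yml."""
    return f"{base_url}/search?q={quote(query)}&format=json"

# Fetching would then be e.g.:
#   requests.get(build_search_url("vitamin c foods")).json()["results"]
```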

+ 0 - 62
setup_sprint10_taiga.py

@@ -1,62 +0,0 @@
-import requests
-import urllib3
-from datetime import datetime, timedelta
-
-urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
-
-# Authenticate
-base_url = 'https://192.168.130.161/taiga/api/v1'
-auth_url = f'{base_url}/auth'
-auth = requests.post(auth_url, json={'type': 'normal', 'username': 'FrancoisLange', 'password': 'BTSai123'}, verify=False).json()
-headers = {'Authorization': f'Bearer {auth["auth_token"]}', 'Content-Type': 'application/json'}
-proj_id = 21
-
-print("Fetching Sprints...")
-milestones = requests.get(f'{base_url}/milestones?project={proj_id}', headers=headers, verify=False).json()
-sprint10 = next((m for m in milestones if m['name'] == 'Sprint 10'), None)
-
-if not sprint10:
-    print("Sprint 10 not found, creating it...")
-    payload = {
-        "project": proj_id,
-        "name": "Sprint 10",
-        "estimated_start": datetime.now().strftime('%Y-%m-%d'),
-        "estimated_finish": (datetime.now() + timedelta(days=7)).strftime('%Y-%m-%d')
-    }
-    sprint10 = requests.post(f'{base_url}/milestones', json=payload, headers=headers, verify=False).json()
-    
-sprint_id = sprint10['id']
-print(f"Sprint 10 ID: {sprint_id}")
-
-stories = [
-    {"subject": "Fix Llama3 Tool Compatibility", "description": "Upgrade local LLM from llama3 to llama3.1 to support the tool calling API required by Streamlit."},
-    {"subject": "Resolve MySQL Cartesian Product Explosion", "description": "Identify and fix duplicate `code` entries causing massive JOIN explosions and Streamlit duplicate key crashes."},
-    {"subject": "Implement Subquery First Optimization Strategy", "description": "Rewrite application SQL queries to apply MATCH() AGAINST() limits inside a subquery before executing LEFT JOINS, reducing search times to milliseconds."},
-    {"subject": "UI Execution Timers", "description": "Add time.time() measurement wrappers around SQL queries and display execution times to the user to monitor application performance."},
-    {"subject": "Zabbix Microsoft Teams Alert Integration", "description": "Configure Zabbix Webhook Media Types to post server and application alerts directly to a Microsoft Teams channel."}
-]
-
-for s in stories:
-    payload = {
-        "project": proj_id,
-        "subject": s["subject"],
-        "description": s["description"],
-        "milestone": sprint_id
-    }
-    res = requests.post(f'{base_url}/userstories', json=payload, headers=headers, verify=False)
-    if res.status_code == 201:
-        us = res.json()
-        print(f"Created US: {us['subject']}")
-        
-        # Create a task for it
-        t_payload = {
-            "project": proj_id,
-            "subject": f"Execute: {us['subject']}",
-            "user_story": us['id'],
-            "milestone": sprint_id
-        }
-        requests.post(f'{base_url}/tasks', json=t_payload, headers=headers, verify=False)
-    else:
-        print(f"Failed US: {res.text}")
-
-print("Sprint 10 populated!")

+ 0 - 80
setup_sprint11_taiga.py

@@ -1,80 +0,0 @@
-import requests
-import urllib3
-from datetime import datetime, timedelta
-
-urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
-
-base_url = 'https://192.168.130.161/taiga/api/v1'
-auth = requests.post(f'{base_url}/auth', json={'type': 'normal', 'username': 'FrancoisLange', 'password': 'BTSai123'}, verify=False).json()
-headers = {'Authorization': f'Bearer {auth["auth_token"]}', 'Content-Type': 'application/json'}
-proj_id = 21
-
-print("Fetching Sprints...")
-milestones = requests.get(f'{base_url}/milestones?project={proj_id}', headers=headers, verify=False).json()
-sprint11 = next((m for m in milestones if m['name'] == 'Sprint 11'), None)
-
-if not sprint11:
-    print("Sprint 11 not found, creating it...")
-    payload = {
-        "project": proj_id,
-        "name": "Sprint 11",
-        "estimated_start": datetime.now().strftime('%Y-%m-%d'),
-        "estimated_finish": (datetime.now() + timedelta(days=7)).strftime('%Y-%m-%d')
-    }
-    sprint11 = requests.post(f'{base_url}/milestones', json=payload, headers=headers, verify=False).json()
-    
-sprint_id = sprint11['id']
-print(f"Sprint 11 ID: {sprint_id}")
-
-stories = [
-    {"subject": "Pre-Emptive Database Cleaning via Upsert", "description": "Rewrite ingestion logic to use COALESCE and ON DUPLICATE KEY UPDATE to clean massive CSV dataset without requiring GROUP BY in the app layer."},
-    {"subject": "Cascaded Search Logic & Nutrient Selectors", "description": "Update Plate Builder to allow scoped search (Name vs Ingredients) and sort results dynamically by nutrient richness (Iron, Vitamin C)."},
-    {"subject": "Food Scale Conversion Expansion", "description": "Update unit_converter.py to natively support extra-large, large, medium, and small egg sizing and generic food scales."},
-    {"subject": "Self-Detaching NOHUP Ingestion Sync", "description": "Refactor data_sync.sh to proactively request sudo authentication upfront and detach the process to survive SSH drops."}
-]
-
-for s in stories:
-    payload = {
-        "project": proj_id,
-        "subject": s["subject"],
-        "description": s["description"],
-        "milestone": sprint_id
-    }
-    res = requests.post(f'{base_url}/userstories', json=payload, headers=headers, verify=False)
-    if res.status_code == 201:
-        us = res.json()
-        print(f"Created US: {us['subject']}")
-        
-        # Create a task for it
-        t_payload = {
-            "project": proj_id,
-            "subject": f"Execute: {us['subject']}",
-            "user_story": us['id'],
-            "milestone": sprint_id
-        }
-        requests.post(f'{base_url}/tasks', json=t_payload, headers=headers, verify=False)
-
-print("Sprint 11 populated!")
-
-# Get Closed status IDs
-us_statuses = requests.get(f'{base_url}/userstory-statuses?project={proj_id}', headers=headers, verify=False).json()
-task_statuses = requests.get(f'{base_url}/task-statuses?project={proj_id}', headers=headers, verify=False).json()
-
-closed_us_status = next((s['id'] for s in us_statuses if s['is_closed']), None)
-closed_task_status = next((s['id'] for s in task_statuses if s['is_closed']), None)
-
-# Update User Stories
-us_list = requests.get(f'{base_url}/userstories?project={proj_id}&milestone={sprint_id}', headers=headers, verify=False).json()
-for us in us_list:
-    payload = {"status": closed_us_status, "version": us['version']}
-    requests.patch(f'{base_url}/userstories/{us["id"]}', json=payload, headers=headers, verify=False)
-    print(f"Closed User Story: {us['subject']}")
-
-# Update Tasks
-tasks = requests.get(f'{base_url}/tasks?project={proj_id}&milestone={sprint_id}', headers=headers, verify=False).json()
-for task in tasks:
-    payload = {"status": closed_task_status, "version": task['version']}
-    requests.patch(f'{base_url}/tasks/{task["id"]}', json=payload, headers=headers, verify=False)
-    print(f"Closed Task: {task['subject']}")
-
-print("Sprint 11 successfully closed!")

+ 0 - 60
setup_sprint7_taiga.py

@@ -1,60 +0,0 @@
-import requests
-import urllib3
-from datetime import datetime, timedelta
-
-urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
-
-# Authenticate
-base_url = 'https://192.168.130.161/taiga/api/v1'
-auth_url = f'{base_url}/auth'
-auth = requests.post(auth_url, json={'type': 'normal', 'username': 'FrancoisLange', 'password': 'BTSai123'}, verify=False).json()
-headers = {'Authorization': f'Bearer {auth["auth_token"]}', 'Content-Type': 'application/json'}
-proj_id = 21
-
-print("Fetching Sprints...")
-milestones = requests.get(f'{base_url}/milestones?project={proj_id}', headers=headers, verify=False).json()
-sprint7 = next((m for m in milestones if m['name'] == 'Sprint 7'), None)
-
-if not sprint7:
-    print("Sprint 7 not found, creating it...")
-    payload = {
-        "project": proj_id,
-        "name": "Sprint 7",
-        "estimated_start": datetime.now().strftime('%Y-%m-%d'),
-        "estimated_finish": (datetime.now() + timedelta(days=7)).strftime('%Y-%m-%d')
-    }
-    sprint7 = requests.post(f'{base_url}/milestones', json=payload, headers=headers, verify=False).json()
-    
-sprint_id = sprint7['id']
-print(f"Sprint 7 ID: {sprint_id}")
-
-stories = [
-    {"subject": "Zabbix Server Docker Setup", "description": "Deploy Zabbix server, Zabbix Web, and Zabbix Agent via Docker compose utilizing the host MySQL database."},
-    {"subject": "SNMPv3 Integration", "description": "Implement pysnmp to send AuthPriv SNMPv3 traps to Zabbix."},
-    {"subject": "Application Component Traps", "description": "Inject SNMP traps into Streamlit app.py and background ingestion processes."}
-]
-
-for s in stories:
-    payload = {
-        "project": proj_id,
-        "subject": s["subject"],
-        "description": s["description"],
-        "milestone": sprint_id
-    }
-    res = requests.post(f'{base_url}/userstories', json=payload, headers=headers, verify=False)
-    if res.status_code == 201:
-        us = res.json()
-        print(f"Created US: {us['subject']}")
-        
-        # Create a task for it
-        t_payload = {
-            "project": proj_id,
-            "subject": f"Execute: {us['subject']}",
-            "user_story": us['id'],
-            "milestone": sprint_id
-        }
-        requests.post(f'{base_url}/tasks', json=t_payload, headers=headers, verify=False)
-    else:
-        print(f"Failed US: {res.text}")
-
-print("Sprint 7 populated!")

+ 0 - 64
setup_sprint8_taiga.py

@@ -1,64 +0,0 @@
-import requests
-import urllib3
-from datetime import datetime, timedelta
-
-urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
-
-# Authenticate
-base_url = 'https://192.168.130.161/taiga/api/v1'
-auth_url = f'{base_url}/auth'
-auth = requests.post(auth_url, json={'type': 'normal', 'username': 'FrancoisLange', 'password': 'BTSai123'}, verify=False).json()
-headers = {'Authorization': f'Bearer {auth["auth_token"]}', 'Content-Type': 'application/json'}
-proj_id = 21
-
-print("Fetching Sprints...")
-milestones = requests.get(f'{base_url}/milestones?project={proj_id}', headers=headers, verify=False).json()
-sprint8 = next((m for m in milestones if m['name'] == 'Sprint 8'), None)
-
-if not sprint8:
-    print("Sprint 8 not found, creating it...")
-    payload = {
-        "project": proj_id,
-        "name": "Sprint 8",
-        "estimated_start": datetime.now().strftime('%Y-%m-%d'),
-        "estimated_finish": (datetime.now() + timedelta(days=7)).strftime('%Y-%m-%d')
-    }
-    sprint8 = requests.post(f'{base_url}/milestones', json=payload, headers=headers, verify=False).json()
-    
-sprint_id = sprint8['id']
-print(f"Sprint 8 ID: {sprint_id}")
-
-stories = [
-    {"subject": "Clinical Explorer Verification Testing", "description": "Generate comprehensive test cases for a Pregnant, Diabetic, and Kidney patient profile across AI Chat, Search, and Meal Planning."},
-    {"subject": "Zabbix Application Monitoring Checks", "description": "Verify Zabbix installation and configure it to monitor both the server and application successfully."},
-    {"subject": "Zabbix Email Integration", "description": "Configure Zabbix default mail media types to direct alerts to the administrator email."},
-    {"subject": "Zabbix Live Alert Testing", "description": "Simulate and trigger an application alert and a server alert to confirm detection by Zabbix."},
-    {"subject": "Server Backup Procedures", "description": "Generate and document a formalized backup procedure for the MySQL databases and Docker infrastructure."},
-    {"subject": "WSL Deployment Playbook", "description": "Create a procedural runbook for deploying the entire Local Food AI project into a new fresh WSL instance."},
-    {"subject": "Agile Scrum Rituals Wiki", "description": "Document the Agile methodologies used in the project including Daily scrums, Sprint Reviews, Retrospectives, and Planning in the Wiki."}
-]
-
-for s in stories:
-    payload = {
-        "project": proj_id,
-        "subject": s["subject"],
-        "description": s["description"],
-        "milestone": sprint_id
-    }
-    res = requests.post(f'{base_url}/userstories', json=payload, headers=headers, verify=False)
-    if res.status_code == 201:
-        us = res.json()
-        print(f"Created US: {us['subject']}")
-        
-        # Create a task for it
-        t_payload = {
-            "project": proj_id,
-            "subject": f"Execute: {us['subject']}",
-            "user_story": us['id'],
-            "milestone": sprint_id
-        }
-        requests.post(f'{base_url}/tasks', json=t_payload, headers=headers, verify=False)
-    else:
-        print(f"Failed US: {res.text}")
-
-print("Sprint 8 populated!")

+ 0 - 13
setup_unix_user.sh

@@ -1,13 +0,0 @@
-#!/bin/bash
-# create unix service user for Food AI
-USERNAME="food_ai"
-PASSWORD="BTSai123"
-# Check if user exists
-if id -u "$USERNAME" >/dev/null 2>&1; then
-  echo "User $USERNAME already exists"
-else
-  sudo useradd -m "$USERNAME" && echo "$USERNAME:$PASSWORD" | sudo chpasswd
-  # Add to the docker group so the service user can manage containers
-  sudo usermod -aG docker "$USERNAME"
-  echo "User $USERNAME created and added to docker group"
-fi

+ 0 - 65
sync_current_sprint.py

@@ -1,65 +0,0 @@
-import requests
-import urllib3
-from datetime import datetime, timedelta
-
-urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
-base_url = 'https://192.168.130.161/taiga/api/v1'
-
-def sync():
-    try:
-        # Authenticate
-        auth = requests.post(f'{base_url}/auth', json={'type': 'normal', 'username': 'FrancoisLange', 'password': 'BTSai123'}, verify=False).json()
-        headers = {'Authorization': f'Bearer {auth["auth_token"]}', 'Content-Type': 'application/json'}
-        proj_id = 21
-
-        # 1. Fetch Milestones
-        milestones = requests.get(f'{base_url}/milestones?project={proj_id}', headers=headers, verify=False).json()
-
-        # We will create Sprint 9 if it doesn't exist
-        sprint9 = next((m for m in milestones if m['name'] == 'Sprint 9'), None)
-
-        if not sprint9:
-            sprint_start = datetime.now()
-            payload = {
-                "project": proj_id,
-                "name": "Sprint 9",
-                "estimated_start": sprint_start.strftime('%Y-%m-%d'),
-                "estimated_finish": (sprint_start + timedelta(days=7)).strftime('%Y-%m-%d')
-            }
-            sprint9 = requests.post(f'{base_url}/milestones', json=payload, headers=headers, verify=False).json()
-            print("Created Sprint 9")
-            
-        sprint_id = sprint9['id']
-        
-        # 2. Create User Story
-        us_payload = {
-            "project": proj_id, 
-            "subject": "Deep Containerization and Zabbix Telemetry Overhaul", 
-            "description": "Split the monolith into isolated Docker containers (App, MySQL, Ollama, Ingest) and configure Zabbix trigger dependencies (App Failure depends on DB Failure).", 
-            "milestone": sprint_id
-        }
-        res = requests.post(f'{base_url}/userstories', json=us_payload, headers=headers, verify=False).json()
-        us_id = res['id']
-        print(f"Created US: TG-{res['ref']}")
-        
-        # 3. Create Tasks
-        tasks = [
-            "Centralize docker-compose.yml with individual component services",
-            "Integrate NVIDIA GPU support for Ollama container",
-            "Update App and Ingest Dockerfiles to include SNMP telemetry packages",
-            "Write Zabbix API script to create App -> MySQL trigger dependencies",
-            "Sync Git repository and update Taiga tracking"
-        ]
-        
-        for task_subject in tasks:
-            t_payload = {"project": proj_id, "subject": task_subject, "user_story": us_id, "milestone": sprint_id}
-            t_res = requests.post(f'{base_url}/tasks', json=t_payload, headers=headers, verify=False).json()
-            print(f"Created Task: TG-{t_res['ref']}")
-            
-        print("Successfully synchronized with Taiga.")
-        
-    except Exception as e:
-        print(f"Error syncing to Taiga: {e}")
-
-if __name__ == "__main__":
-    sync()

+ 0 - 79
sync_taiga.py

@@ -1,79 +0,0 @@
-import os
-import requests
-import urllib3
-from datetime import datetime, timedelta
-
-urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
-
-# Authenticate
-base_url = 'https://192.168.130.161/taiga/api/v1'
-auth_url = f'{base_url}/auth'
-auth = requests.post(auth_url, json={'type': 'normal', 'username': 'FrancoisLange', 'password': 'BTSai123'}, verify=False).json()
-headers = {'Authorization': f'Bearer {auth["auth_token"]}', 'Content-Type': 'application/json'}
-proj_id = 21
-
-# 1. Sync Milestones (Sprints 1-8)
-print("Syncing Sprints/Milestones...")
-existing_m_res = requests.get(f'{base_url}/milestones?project={proj_id}', headers=headers, verify=False).json()
-existing_sprints = {m['name']: m for m in existing_m_res}
-
-start_date = datetime(2026, 4, 16)
-for i in range(1, 9):
-    sprint_name = f"Sprint {i}"
-    sprint_start = start_date + timedelta(weeks=i-1)
-    sprint_end = sprint_start + timedelta(days=6)
-    
-    if sprint_name not in existing_sprints:
-        payload = {
-            "project": proj_id,
-            "name": sprint_name,
-            "estimated_start": sprint_start.strftime('%Y-%m-%d'),
-            "estimated_finish": sprint_end.strftime('%Y-%m-%d')
-        }
-        res = requests.post(f'{base_url}/milestones', json=payload, headers=headers, verify=False)
-        if res.status_code == 201:
-            print(f"Created {sprint_name}")
-        else:
-            print(f"Failed to create {sprint_name}: {res.text}")
-
-# 2. Sync Wiki
-print("\nSyncing Wiki pages...")
-for filename in os.listdir('taiga_wiki'):
-    if not filename.endswith('.md'):
-        continue
-    
-    filepath = os.path.join('taiga_wiki', filename)
-    with open(filepath, 'r') as f:
-        content = f.read()
-        
-    slug = filename.replace('.md', '').lower().replace('_', '-')
-    
-    # Check if exists
-    res = requests.get(f'{base_url}/wiki/by_slug?slug={slug}&project={proj_id}', headers=headers, verify=False)
-    if res.status_code == 200:
-        # Update
-        page_id = res.json()['id']
-        version = res.json()['version']
-        payload = {
-            "project": proj_id,
-            "slug": slug,
-            "content": content,
-            "version": version
-        }
-        update_res = requests.put(f'{base_url}/wiki/{page_id}', json=payload, headers=headers, verify=False)
-        print(f"Updated wiki '{slug}' (status {update_res.status_code})")
-        if update_res.status_code != 200:
-            print(update_res.text)
-    else:
-        # Create
-        payload = {
-            "project": proj_id,
-            "slug": slug,
-            "content": content
-        }
-        create_res = requests.post(f'{base_url}/wiki', json=payload, headers=headers, verify=False)
-        print(f"Created wiki '{slug}' (status {create_res.status_code})")
-        if create_res.status_code != 201:
-            print(create_res.text)
-
-print("\nDone syncing to Taiga Local Food AI project!")
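The wiki-sync loop above derives the page slug inline with chained `replace` calls, which would mangle a filename containing `.md` mid-string. A safer sketch of the same rule (this helper is illustrative, not part of the deleted script):

```python
def filename_to_slug(filename):
    """Mirror the sync rule: strip the .md suffix, lowercase, underscores to hyphens.

    Returns None for non-Markdown files, matching the loop's `continue`.
    """
    if not filename.endswith(".md"):
        return None
    return filename[:-3].lower().replace("_", "-")
```

For example, `Sprint_1.md` maps to the slug `sprint-1`, which is the slug the Taiga `by_slug` lookup then queries.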

+ 0 - 39
taiga_audit.py

@@ -1,39 +0,0 @@
-import requests
-import urllib3
-import json
-
-urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
-
-base_url = 'https://192.168.130.161/taiga/api/v1'
-auth = requests.post(f'{base_url}/auth', json={'type': 'normal', 'username': 'FrancoisLange', 'password': 'BTSai123'}, verify=False).json()
-headers = {'Authorization': f'Bearer {auth["auth_token"]}', 'Content-Type': 'application/json'}
-
-proj_id = 21
-
-print("--- TAIGA AUDIT REPORT ---")
-# 1. Sprints Check
-milestones = requests.get(f'{base_url}/milestones?project={proj_id}', headers=headers, verify=False).json()
-print(f"Total Sprints: {len(milestones)}")
-for m in milestones:
-    us_list = m.get('user_stories', [])
-    if not us_list:
-        print(f"WARNING: Sprint '{m['name']}' has NO User Stories.")
-    else:
-        # Get tasks for sprint
-        tasks = requests.get(f'{base_url}/tasks?project={proj_id}&milestone={m["id"]}', headers=headers, verify=False).json()
-        if not tasks:
-            print(f"WARNING: Sprint '{m['name']}' has User Stories but NO Tasks.")
-
-# 2. User Stories Check
-us_all = requests.get(f'{base_url}/userstories?project={proj_id}', headers=headers, verify=False).json()
-for us in us_all:
-    tasks = requests.get(f'{base_url}/tasks?project={proj_id}&user_story={us["id"]}', headers=headers, verify=False).json()
-    if not tasks:
-        print(f"WARNING: User Story '{us['subject']}' (ID: {us['id']}) has NO Tasks.")
-
-# 3. Wiki Check
-wiki_pages = requests.get(f'{base_url}/wiki?project={proj_id}', headers=headers, verify=False).json()
-print("\n--- WIKI PAGES ---")
-for wp in wiki_pages:
-    print(f"- {wp['slug']}")
-

+ 0 - 91
taiga_sync_fixer.py

@@ -1,91 +0,0 @@
-import requests, urllib3
-urllib3.disable_warnings()
-
-# Configuration – adjust as needed
-TAIGA_URL = 'https://192.168.130.161/taiga/api/v1'
-USERNAME = 'lanfr1904@outlook.com'
-PASSWORD = 'BTSai123'
-PROJECT_ID = 21
-DEFAULT_TASK_SUBJECT = 'Auto‑generated task (define details)'
-DEFAULT_POINTS = 1
-
-# Authenticate and obtain token
-auth_resp = requests.post(
-    f'{TAIGA_URL}/auth',
-    json={'type': 'normal', 'username': USERNAME, 'password': PASSWORD},
-    verify=False
-).json()
-
-token = auth_resp.get('auth_token')
-if not token:
-    raise RuntimeError('Authentication to Taiga failed')
-headers = {'Authorization': f'Bearer {token}'}
-
-# Helper functions
-def get_user_stories():
-    resp = requests.get(
-        f'{TAIGA_URL}/userstories?project={PROJECT_ID}',
-        headers=headers,
-        verify=False
-    )
-    resp.raise_for_status()
-    return resp.json()
-
-def get_tasks_for_us(us_id):
-    resp = requests.get(
-        f'{TAIGA_URL}/tasks?user_story={us_id}',
-        headers=headers,
-        verify=False
-    )
-    resp.raise_for_status()
-    return resp.json()
-
-def create_task(us_id):
-    payload = {
-        'subject': DEFAULT_TASK_SUBJECT,
-        'user_story': us_id,
-        'project': PROJECT_ID,
-        'status': 101  # Status 101 = "New" for project 21
-    }
-    resp = requests.post(
-        f'{TAIGA_URL}/tasks',
-        json=payload,
-        headers=headers,
-        verify=False
-    )
-    if not resp.ok:
-        print("Error creating task:", resp.text)
-    resp.raise_for_status()
-    return resp.json()
-
-def set_points(us_id, points, version):
-    payload = {
-        'total_points': points,
-        'version': version
-    }
-    resp = requests.patch(
-        f'{TAIGA_URL}/userstories/{us_id}',
-        json=payload,
-        headers=headers,
-        verify=False
-    )
-    if not resp.ok:
-        print("Error setting points:", resp.text)
-    resp.raise_for_status()
-    return resp.json()
-
-def main():
-    us_list = get_user_stories()
-    for us in us_list:
-        # 1️⃣ Ensure at least one task exists
-        tasks = get_tasks_for_us(us['id'])
-        if not tasks:
-            print(f"US #{us['ref']} missing tasks – creating default task")
-            create_task(us['id'])
-        # 2️⃣ Ensure story has points
-        if not us.get('total_points'):
-            print(f"US #{us['ref']} missing points – setting to {DEFAULT_POINTS}")
-            set_points(us['id'], DEFAULT_POINTS, us['version'])
-
-if __name__ == '__main__':
-    main()
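One caveat on the deleted `set_points` helper: in the Taiga API, `total_points` is a computed read-only field, and estimates are normally set via a `points` mapping of role id to point id. If that applies here, the PATCH body would need to look more like the following sketch (the role and point ids are hypothetical, and this assumption should be checked against the Taiga instance):

```python
def points_payload(role_to_point, version):
    """Build a PATCH body mapping role id -> point id, plus the optimistic-lock version.

    role_to_point: e.g. {4: 17} meaning role 4 gets the point value whose id is 17.
    These ids are placeholders, not values from the original script.
    """
    return {
        "points": {str(role): point for role, point in role_to_point.items()},
        "version": version,
    }
```

The `version` field is required either way; Taiga rejects PATCHes whose version lags the stored object.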

+ 0 - 27
taiga_sync_fixes.py

@@ -1,27 +0,0 @@
-import requests, urllib3
-urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
-base_url = 'https://192.168.130.161/taiga/api/v1'
-auth = requests.post(f'{base_url}/auth', json={'type': 'normal', 'username': 'FrancoisLange', 'password': 'BTSai123'}, verify=False).json()
-headers = {'Authorization': f'Bearer {auth["auth_token"]}', 'Content-Type': 'application/json'}
-proj_id = 21
-
-# Fetch sprint 8
-milestones = requests.get(f'{base_url}/milestones?project={proj_id}', headers=headers, verify=False).json()
-sprint8 = next((m for m in milestones if m['name'] == 'Sprint 8'), None)
-sprint_id = sprint8['id'] if sprint8 else None
-
-if sprint_id:
-    payload = {
-        "project": proj_id,
-        "subject": "Sprint 8 Final Bug Fixes & Polish",
-        "description": "Implemented dynamic Help Sections in UI, upgraded LLM to Llama3 with strict anti-hallucination prompts, dynamic Pandas Styler limit caps, MyPlate bug fixes, and null-macro product filters.",
-        "milestone": sprint_id
-    }
-    res = requests.post(f'{base_url}/userstories', json=payload, headers=headers, verify=False)
-    if res.status_code == 201:
-        us = res.json()
-        print(f"Created Fixes US: {us['subject']}")
-        t_payload = {"project": proj_id, "subject": "Execute Bug Fixes", "user_story": us['id'], "milestone": sprint_id}
-        requests.post(f'{base_url}/tasks', json=t_payload, headers=headers, verify=False)
-    else:
-        print(f"Failed US: {res.text}")

+ 0 - 73
taiga_sync_sprint11_p2.py

@@ -1,73 +0,0 @@
-import requests
-import urllib3
-
-urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
-
-base_url = 'https://192.168.130.161/taiga/api/v1'
-auth = requests.post(f'{base_url}/auth', json={'type': 'normal', 'username': 'FrancoisLange', 'password': 'BTSai123'}, verify=False).json()
-headers = {'Authorization': f'Bearer {auth["auth_token"]}', 'Content-Type': 'application/json'}
-proj_id = 21
-
-print("Fetching Sprints...")
-milestones = requests.get(f'{base_url}/milestones?project={proj_id}', headers=headers, verify=False).json()
-sprint11 = next((m for m in milestones if m['name'] == 'Sprint 11'), None)
-
-if not sprint11:
-    print("Sprint 11 not found! Exiting.")
-    exit(1)
-    
-sprint_id = sprint11['id']
-print(f"Sprint 11 ID: {sprint_id}")
-
-stories = [
-    {"subject": "AI Dietary Restriction SQL Enforcement", "description": "Dynamically map User EAV profiles (Diabetes, Kosher, Halal, Christian) directly to SQL WHERE clauses to enforce medical boundaries before AI generation."},
-    {"subject": "Zabbix Database Ingestion Telemetry", "description": "Build python script to fetch exact row counts from products_core and push SNMP traps to Zabbix Server 192.168.130.170."}
-]
-
-for s in stories:
-    payload = {
-        "project": proj_id,
-        "subject": s["subject"],
-        "description": s["description"],
-        "milestone": sprint_id
-    }
-    res = requests.post(f'{base_url}/userstories', json=payload, headers=headers, verify=False)
-    if res.status_code == 201:
-        us = res.json()
-        print(f"Created US: {us['subject']}")
-        
-        # Create a task for it
-        t_payload = {
-            "project": proj_id,
-            "subject": f"Execute: {us['subject']}",
-            "user_story": us['id'],
-            "milestone": sprint_id
-        }
-        requests.post(f'{base_url}/tasks', json=t_payload, headers=headers, verify=False)
-
-print("New User Stories populated!")
-
-# Get Closed status IDs
-us_statuses = requests.get(f'{base_url}/userstory-statuses?project={proj_id}', headers=headers, verify=False).json()
-task_statuses = requests.get(f'{base_url}/task-statuses?project={proj_id}', headers=headers, verify=False).json()
-
-closed_us_status = next((s['id'] for s in us_statuses if s['is_closed']), None)
-closed_task_status = next((s['id'] for s in task_statuses if s['is_closed']), None)
-
-# Update User Stories
-us_list = requests.get(f'{base_url}/userstories?project={proj_id}&milestone={sprint_id}', headers=headers, verify=False).json()
-for us in us_list:
-    if us['status'] != closed_us_status:
-        payload = {"status": closed_us_status, "version": us['version']}
-        requests.patch(f'{base_url}/userstories/{us["id"]}', json=payload, headers=headers, verify=False)
-        print(f"Closed User Story: {us['subject']}")
-
-# Update Tasks
-tasks = requests.get(f'{base_url}/tasks?project={proj_id}&milestone={sprint_id}', headers=headers, verify=False).json()
-for task in tasks:
-    if task['status'] != closed_task_status:
-        payload = {"status": closed_task_status, "version": task['version']}
-        requests.patch(f'{base_url}/tasks/{task["id"]}', json=payload, headers=headers, verify=False)
-        print(f"Closed Task: {task['subject']}")
-
-print("Sprint 11 successfully closed!")
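The sprint-close logic above resolves the closed status with an inline `next(...)` in two places. A small reusable sketch of that lookup (the helper name is illustrative):

```python
def closed_status_id(statuses):
    """Return the id of the first status flagged is_closed, or None if absent.

    Works for both /userstory-statuses and /task-statuses responses.
    """
    return next((s["id"] for s in statuses if s.get("is_closed")), None)
```

Guarding against the `None` case matters: if the project defines no closed status, the original script would PATCH `"status": None` and fail.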

+ 0 - 10
taiga_wiki/00_Epics.md

@@ -1,10 +0,0 @@
-# Project Epics
-
-1. Environment & Infrastructure Setup
-2. Database Schema & User Security
-3. Data Ingestion Pipeline
-4. Advanced Text & Context Search
-5. Local LLM Integration (Ollama)
-6. Streamlit Chat Interface Development
-7. Testing & Refinement
-8. Production Deployment

+ 0 - 39
taiga_wiki/Sprint_1.md

@@ -1,39 +0,0 @@
-# Sprint 1
-
-**Sprint Tag**: Sprint_1
-**Story Points**: 1000
-**Members**: francois, evegi144
-
-## 2026/04/16 Planning
-- [x] Initialize Git Repo and configure AI History context.
-- [x] Setup Taiga Wiki and Backlog generation.
-- [x] Finalize `deploy.sh` and Database Setup (`init.sql`, `setup_db.py`).
-- [x] Data Ingestion Pipeline (`ingest_csv.py`, `convert_datatypes.py`).
-- [x] Build basic Streamlit Base App (`app.py`).
-
-### 2026/04/16 Daily Scrum
-- **evegi144**: 
-- **francois**: Set up git, database, and ingestion scripts.
-
-### 2026/04/17 Daily Scrum
-- **evegi144**: 
-- **francois**: 
-
-### 2026/04/18 Daily Scrum
-- **evegi144**: 
-- **francois**: 
-
-### 2026/04/19 Daily Scrum
-- **evegi144**: 
-- **francois**: 
-
-### 2026/04/20 Daily Scrum
-- **evegi144**: 
-- **francois**: 
-
-## 2026/04/22 Review
-- **Review**: Successfully pushed all foundational files to Git and configured DB schemas.
-
-## 2026/04/22 Retrospective
-- **Retrospective**: Good velocity. Environment setup went smoothly.
-

+ 0 - 37
taiga_wiki/Sprint_2.md

@@ -1,37 +0,0 @@
-# Sprint 2
-
-**Sprint Tag**: Sprint_2
-**Story Points**: 1000
-**Members**: francois, evegi144
-
-## 2026/04/23 Planning
-- [x] Execute Sprint 2: Core Nutritional Database.
-- [x] Test and verify `ingest_csv.py` for CSV Pandas imports.
-- [x] Implement the Database Search tab in the Streamlit UI (`app.py`).
-
-### 2026/04/23 Daily Scrum
-- **evegi144**: 
-- **francois**: Verified Streamlit search views and Pandas ingestion pipeline.
-
-### 2026/04/24 Daily Scrum
-- **evegi144**: 
-- **francois**: 
-
-### 2026/04/25 Daily Scrum
-- **evegi144**: 
-- **francois**: 
-
-### 2026/04/26 Daily Scrum
-- **evegi144**: 
-- **francois**: 
-
-### 2026/04/27 Daily Scrum
-- **evegi144**: 
-- **francois**: 
-
-## 2026/04/29 Review
-- **Review**: Data ingestion strategy and Streamlit search features are fully coded and finalized ahead of schedule.
-
-## 2026/04/29 Retrospective
-- **Retrospective**: Extremely efficient. By pre-building `app.py` search logic during Sprint 1, Sprint 2 was completed seamlessly.
-

+ 0 - 35
taiga_wiki/Sprint_3.md

@@ -1,35 +0,0 @@
-# Sprint 3
-
-**Sprint Tag**: Sprint_3
-**Story Points**: 1000
-**Members**: francois, evegi144
-
-## 2026/04/30 Planning
-- Planning notes...
-
-### 2026/04/30 Daily Scrum
-- **evegi144**: 
-- **francois**: 
-
-### 2026/05/01 Daily Scrum
-- **evegi144**: 
-- **francois**: 
-
-### 2026/05/02 Daily Scrum
-- **evegi144**: 
-- **francois**: 
-
-### 2026/05/03 Daily Scrum
-- **evegi144**: 
-- **francois**: 
-
-### 2026/05/04 Daily Scrum
-- **evegi144**: 
-- **francois**: 
-
-## 2026/05/06 Review
-- Review notes...
-
-## 2026/05/06 Retrospective
-- Retrospective notes...
-

+ 0 - 35
taiga_wiki/Sprint_4.md

@@ -1,35 +0,0 @@
-# Sprint 4
-
-**Sprint Tag**: Sprint_4
-**Story Points**: 1000
-**Members**: francois, evegi144
-
-## 2026/05/07 Planning
-- Planning notes...
-
-### 2026/05/07 Daily Scrum
-- **evegi144**: 
-- **francois**: 
-
-### 2026/05/08 Daily Scrum
-- **evegi144**: 
-- **francois**: 
-
-### 2026/05/09 Daily Scrum
-- **evegi144**: 
-- **francois**: 
-
-### 2026/05/10 Daily Scrum
-- **evegi144**: 
-- **francois**: 
-
-### 2026/05/11 Daily Scrum
-- **evegi144**: 
-- **francois**: 
-
-## 2026/05/13 Review
-- Review notes...
-
-## 2026/05/13 Retrospective
-- Retrospective notes...
-

+ 0 - 35
taiga_wiki/Sprint_5.md

@@ -1,35 +0,0 @@
-# Sprint 5
-
-**Sprint Tag**: Sprint_5
-**Story Points**: 1000
-**Members**: francois, evegi144
-
-## 2026/05/14 Planning
-- Planning notes...
-
-### 2026/05/14 Daily Scrum
-- **evegi144**: 
-- **francois**: 
-
-### 2026/05/15 Daily Scrum
-- **evegi144**: 
-- **francois**: 
-
-### 2026/05/16 Daily Scrum
-- **evegi144**: 
-- **francois**: 
-
-### 2026/05/17 Daily Scrum
-- **evegi144**: 
-- **francois**: 
-
-### 2026/05/18 Daily Scrum
-- **evegi144**: 
-- **francois**: 
-
-## 2026/05/20 Review
-- Review notes...
-
-## 2026/05/20 Retrospective
-- Retrospective notes...
-

+ 0 - 35
taiga_wiki/Sprint_6.md

@@ -1,35 +0,0 @@
-# Sprint 6
-
-**Sprint Tag**: Sprint_6
-**Story Points**: 1000
-**Members**: francois, evegi144
-
-## 2026/05/21 Planning
-- Planning notes...
-
-### 2026/05/21 Daily Scrum
-- **evegi144**: 
-- **francois**: 
-
-### 2026/05/22 Daily Scrum
-- **evegi144**: 
-- **francois**: 
-
-### 2026/05/23 Daily Scrum
-- **evegi144**: 
-- **francois**: 
-
-### 2026/05/24 Daily Scrum
-- **evegi144**: 
-- **francois**: 
-
-### 2026/05/25 Daily Scrum
-- **evegi144**: 
-- **francois**: 
-
-## 2026/05/27 Review
-- Review notes...
-
-## 2026/05/27 Retrospective
-- Retrospective notes...
-

+ 0 - 35
taiga_wiki/Sprint_7.md

@@ -1,35 +0,0 @@
-# Sprint 7
-
-**Sprint Tag**: Sprint_7
-**Story Points**: 1000
-**Members**: francois, evegi144
-
-## 2026/05/28 Planning
-- Planning notes...
-
-### 2026/05/28 Daily Scrum
-- **evegi144**: 
-- **francois**: 
-
-### 2026/05/29 Daily Scrum
-- **evegi144**: 
-- **francois**: 
-
-### 2026/05/30 Daily Scrum
-- **evegi144**: 
-- **francois**: 
-
-### 2026/05/31 Daily Scrum
-- **evegi144**: 
-- **francois**: 
-
-### 2026/06/01 Daily Scrum
-- **evegi144**: 
-- **francois**: 
-
-## 2026/06/03 Review
-- Review notes...
-
-## 2026/06/03 Retrospective
-- Retrospective notes...
-

+ 0 - 35
taiga_wiki/Sprint_8.md

@@ -1,35 +0,0 @@
-# Sprint 8
-
-**Sprint Tag**: Sprint_8
-**Story Points**: 1000
-**Members**: francois, evegi144
-
-## 2026/06/04 Planning
-- Planning notes...
-
-### 2026/06/04 Daily Scrum
-- **evegi144**: 
-- **francois**: 
-
-### 2026/06/05 Daily Scrum
-- **evegi144**: 
-- **francois**: 
-
-### 2026/06/06 Daily Scrum
-- **evegi144**: 
-- **francois**: 
-
-### 2026/06/07 Daily Scrum
-- **evegi144**: 
-- **francois**: 
-
-### 2026/06/08 Daily Scrum
-- **evegi144**: 
-- **francois**: 
-
-## 2026/06/10 Review
-- Review notes...
-
-## 2026/06/10 Retrospective
-- Retrospective notes...
-

+ 0 - 39
taiga_wiki_260508.py

@@ -1,39 +0,0 @@
-import requests
-import urllib3
-
-urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
-base_url = 'https://192.168.130.161/taiga/api/v1'
-auth = requests.post(f'{base_url}/auth', json={'type': 'normal', 'username': 'FrancoisLange', 'password': 'BTSai123'}, verify=False).json()
-headers = {'Authorization': f'Bearer {auth["auth_token"]}', 'Content-Type': 'application/json'}
-proj_id = 21
-
-slug = '260508-daily'
-content = """# Daily Scrum 26.05.08
-
-## What was done yesterday?
-- Addressed application crashes caused by missing columns (`search_limit`) and tables (`products_core`).
-- Discovered that the DB drop destroyed the entire schema temporarily until the offline ingestion recreated it, causing UI crashes.
-- Implemented `ON DUPLICATE KEY UPDATE` consolidation logic to fix the duplication explosion that degraded search performance.
-
-## What is the plan for today?
-- Ensure the initialization SQL officially defines all vertical partitions (`products_core`, `products_macros`, etc.) so the DB structure exists safely before the offline ingestion completes.
-- Lock down LLM model initialization into the Docker network `command` argument to strictly decouple the 1.3GB model download from the Streamlit UI loading phase.
-- Finalize Food Scale standardizations (`xl`, `l`, `m`, `s`) in `unit_converter.py`.
-
-## Blockers
-- **Data Race Condition**: The LLM model stream download crashed the AI interface when requested immediately on app startup. Fixed by detaching the download to the container orchestrator.
-- **Table Missing Error**: Streamlit `app.py` querying `products_core` before Python `to_sql` had a chance to create it. Fixed by explicitly declaring schemas in `init.sql`.
-"""
-
-payload = {"project": proj_id, "slug": slug, "content": content}
-res = requests.post(f'{base_url}/wiki', json=payload, headers=headers, verify=False)
-if res.status_code == 201:
-    print("Created 260508-daily page!")
-else:
-    # Try put
-    check_req = requests.get(f'{base_url}/wiki?project={proj_id}&slug={slug}', headers=headers, verify=False).json()
-    if len(check_req) > 0:
-        page_id = check_req[0]['id']
-        version = check_req[0]['version']
-        res2 = requests.put(f'{base_url}/wiki/{page_id}', json={"project": proj_id, "slug": slug, "content": content, "version": version}, headers=headers, verify=False)
-        print("Updated 260508-daily page!")
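The daily-scrum page above credits an `ON DUPLICATE KEY UPDATE` consolidation for fixing the duplication explosion. A minimal sketch of how such an upsert statement can be built (the table and column names are assumptions for illustration, not taken from the ingestion code):

```python
def upsert_sql(table, columns, key):
    """Build a MySQL upsert that overwrites non-key columns on duplicate keys."""
    cols = ", ".join(columns)
    placeholders = ", ".join(["%s"] * len(columns))
    updates = ", ".join(f"{c} = VALUES({c})" for c in columns if c != key)
    return (f"INSERT INTO {table} ({cols}) VALUES ({placeholders}) "
            f"ON DUPLICATE KEY UPDATE {updates}")
```

This keeps re-ingested rows from multiplying: a duplicate `code` updates the existing row instead of inserting a second copy, provided the key column carries a UNIQUE or PRIMARY KEY constraint.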

+ 0 - 32
taiga_wiki_bookmarks.py

@@ -1,32 +0,0 @@
-import requests, urllib3
-urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
-
-base_url = 'https://192.168.130.161/taiga/api/v1'
-auth_url = f'{base_url}/auth'
-auth = requests.post(auth_url, json={'type': 'normal', 'username': 'FrancoisLange', 'password': 'BTSai123'}, verify=False).json()
-headers = {'Authorization': f'Bearer {auth["auth_token"]}', 'Content-Type': 'application/json'}
-proj_id = 21
-
-bookmarks = [
-    {"title": "26.04.30 PLAN (Sprint Planning)", "href": "26-04-30-plan"},
-    {"title": "26.04.30 DAILY (Daily Scrum)", "href": "26-04-30-daily"},
-    {"title": "26.04.30 REVIEW (Sprint Review)", "href": "26-04-30-review"},
-    {"title": "26.04.30 RETROSPECTIVE (Sprint Retrospective)", "href": "26-04-30-retrospective"},
-    {"title": "26.04.30 ARTIFACT (Artifacts Used)", "href": "26-04-30-artifact"}
-]
-
-# Get current links to calculate order
-existing = requests.get(f'{base_url}/wiki-links?project={proj_id}', headers=headers, verify=False).json()
-max_order = max([e['order'] for e in existing]) if existing else 0
-
-for i, b in enumerate(bookmarks):
-    payload = {
-        "project": proj_id,
-        "title": b["title"],
-        "href": b["href"],
-        "order": max_order + i + 1
-    }
-    r = requests.post(f'{base_url}/wiki-links', json=payload, headers=headers, verify=False)
-    print(f'Created Bookmark {b["title"]}: {r.status_code}')
-    if r.status_code != 201:
-        print(r.text)

+ 0 - 67
taiga_wiki_may07.py

@@ -1,67 +0,0 @@
-# $Id$
-# $Author$
-# $Log$
-import requests
-import urllib3
-
-urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
-
-base_url = 'https://192.168.130.161/taiga/api/v1'
-auth = requests.post(f'{base_url}/auth', json={'type': 'normal', 'username': 'FrancoisLange', 'password': 'BTSai123'}, verify=False).json()
-headers = {'Authorization': f'Bearer {auth["auth_token"]}', 'Content-Type': 'application/json'}
-
-proj_id = 21
-
-wiki_content = {
-    "260507-daily": {
-        "content": "# 26.05.07 DAILY SCRUM\n\n## What did you do yesterday?\n- Configured the Nginx reverse proxy to run on port 80.\n- Wrote automated bash scripts (`data_sync.sh` and `backup_db.sh`) for data freshness and disaster recovery.\n\n## What will you do today?\n- Fix the `import time` scope error that crashed the UI timers.\n- Inject Git `$Id$` version tracking across the entire codebase using `.gitattributes`.\n- Push all the final Scrum documentation to the Taiga Wiki and sync it locally to `/docs`.\n\n## Are there any impediments?\n- Git keyword expansion requires adding a `.gitattributes` file and replacing `os.path.getmtime` with `$Id$` in the Streamlit UI."
-    },
-    "260507-review": {
-        "content": "# 26.05.07 SPRINT REVIEW\n\n## Sprint 10 Goal\nOptimize performance to remove SQL query freezing, and securely monitor the final architecture.\n\n## Demonstration\n- **Subquery First Strategy:** The Streamlit app no longer freezes during Clinical Data Search or Plate Builder. Queries execute in ~0.040 seconds.\n- **Teams Integration:** The Zabbix Webhook successfully transmitted the `Hello World` test message to the Microsoft Teams channel.\n- **Nginx Proxy:** The application is now natively served via HTTP port 80, handling WebSocket Upgrades perfectly.\n\n## Feedback\n- The UI execution timers are a great touch, proving the backend optimizations were successful.\n- Project is stable and ready for final Git commit and documentation handoff."
-    },
-    "260507-retrospective": {
-        "content": "# 26.05.07 SPRINT RETROSPECTIVE\n\n## What went well?\n- Identifying the Cartesian explosion in MySQL caused by duplicate `code` entries in the OpenFoodFacts datasets.\n- Utilizing standard Nginx configurations to correctly map Streamlit WebSockets securely.\n\n## What could be improved?\n- The initial implementation of `time.time()` was accidentally scoped inside an email function, causing a `NameError`. Better unit testing before pushing to production would catch these scope errors.\n- Git commit messages lacked Taiga `TG-XXX` tags, requiring a retroactive script to sync the Taiga board.\n\n## Action Items\n- Use `TG-XXX` in all future Git commit messages.\n- Ensure `import` statements are strictly maintained at the top of Python modules."
-    },
-    "260507-plan": {
-        "content": "# 26.05.07 SPRINT PLANNING (Day 2 Operations)\n\n## Goal\nTransition from Active Development to Day 2 Operations, focusing on infrastructure hardening and documentation.\n\n## Selected User Stories\n1. **Git Identity Keywords:** Inject `$Id$` headers and `.gitattributes` for native Git versioning.\n2. **Documentation Mirror:** Extract all Taiga Wiki Scrum pages and architectural documentation into a static `docs/` repository for Git syncing.\n3. **Final Report Generation:** Author a comprehensive report outlining what was accomplished and charting the course for future maintenance."
-    },
-    "devops-deploiement": {
-        "content": "# DEVOPS & DÉPLOIEMENT\n\n## Docker Architecture\nThe project utilizes `docker-compose` to orchestrate 4 core containers:\n1. `app` (Streamlit Python UI)\n2. `mysql` (Database Backend)\n3. `nginx` (Reverse Proxy on Port 80 handling WebSockets)\n4. `ingest` (Ephemeral offline Data Ingestion Container)\n\n## Automated Cron Jobs (Day 2 Operations)\nTo ensure system stability over time, two Bash scripts must be configured in the host's `crontab`:\n\n### 1. Data Freshness (`data_sync.sh`)\nSyncs the OpenFoodFacts CSV files. Supports `--online` for `wget` scraping or offline mode for processing locally dropped files.\n\n### 2. Disaster Recovery (`backup_db.sh`)\nExecutes a `mysqldump` directly from the MySQL container, compressing the output to `gzip`. Enforces a strict 7-day retention policy to prevent storage exhaustion.\n\n## Git Versioning\nAll files utilize the `ident` property within `.gitattributes`, injecting real-time Git SHA-1 hashes into file `$Id$` variables for precise version tracking in production."
-    },
-    "architecture-technologies": {
-        "content": "# ARCHITECTURE & TECHNOLOGIES\n\n## Frontend\n- **Streamlit (v1.30+)**: Handles all UI routing and data presentation asynchronously.\n\n## Backend Data\n- **MySQL 8.0**: Features robust horizontal table partitioning across the massive OpenFoodFacts dataset. Queries are heavily optimized using a \"Subquery-First\" limiting strategy to prevent Cartesian explosions during `LEFT JOIN` operations.\n\n## AI Inference Engine\n- **Ollama**: Hosted locally via Docker. Utilizes the **`llama3.1`** model exclusively, as the updated 3.1 architecture supports native API Tool Calling schemas (JSON output), which the Clinical RAG system relies heavily upon to search the MySQL database.\n\n## Monitoring & Alerting\n- **Zabbix**: Actively monitors Docker network health, SNMP traps, and Nginx reverse proxy HTTP codes.\n- **Microsoft Teams Integration**: Zabbix dynamically pushes critical alerts to a designated Microsoft Teams channel using a Python-configured Webhook MediaType."
-    }
-}
-
-for slug, data in wiki_content.items():
-    check_req = requests.get(f'{base_url}/wiki?project={proj_id}&slug={slug}', headers=headers, verify=False)
-    
-    if check_req.status_code == 200:
-        wiki_pages = check_req.json()
-        if len(wiki_pages) > 0:
-            page_id = wiki_pages[0]['id']
-            version = wiki_pages[0]['version']
-            payload = {
-                "project": proj_id,
-                "slug": slug,
-                "content": data["content"],
-                "version": version
-            }
-            res = requests.put(f'{base_url}/wiki/{page_id}', json=payload, headers=headers, verify=False)
-            if res.status_code == 200:
-                print(f"Updated Wiki Page: {slug}")
-            else:
-                print(f"Failed to update {slug}: {res.text}")
-            continue
-
-    # If it doesn't exist, create it
-    payload = {
-        "project": proj_id,
-        "slug": slug,
-        "content": data["content"]
-    }
-    res = requests.post(f'{base_url}/wiki', json=payload, headers=headers, verify=False)
-    if res.status_code == 201:
-        print(f"Created Wiki Page: {slug}")
-    else:
-        print(f"Failed to create {slug}: {res.text}")

+ 0 - 38
taiga_wiki_push.py

@@ -1,38 +0,0 @@
-import requests
-import urllib3
-import os
-
-urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
-base_url = 'https://192.168.130.161/taiga/api/v1'
-auth_url = f'{base_url}/auth'
-auth = requests.post(auth_url, json={'type': 'normal', 'username': 'FrancoisLange', 'password': 'BTSai123'}, verify=False).json()
-headers = {'Authorization': f'Bearer {auth["auth_token"]}', 'Content-Type': 'application/json'}
-proj_id = 21
-
-def push_wiki(slug, md_path):
-    with open(md_path, 'r') as f:
-        content = f.read()
-    
-    res = requests.get(f'{base_url}/wiki?project={proj_id}&slug={slug}', headers=headers, verify=False).json()
-    if len(res) > 0:
-        wiki_id = res[0]['id']
-        version = res[0]['version']
-        payload = {'content': content, 'version': version, 'project': proj_id, 'slug': slug}
-        r = requests.put(f'{base_url}/wiki/{wiki_id}', json=payload, headers=headers, verify=False)
-        print(f'Updated {slug}: {r.status_code}')
-        if r.status_code != 200:
-            print(r.text)
-    else:
-        payload = {'project': proj_id, 'slug': slug, 'content': content}
-        r = requests.post(f'{base_url}/wiki', json=payload, headers=headers, verify=False)
-        print(f'Created {slug}: {r.status_code}')
-        if r.status_code != 201:
-            print(r.text)
-
-# In Taiga, the home page of the wiki is usually 'home'
-push_wiki('home', 'docs/Wiki_Home.md')
-push_wiki('26-04-30-plan', 'docs/Scrum_Plan.md')
-push_wiki('26-04-30-daily', 'docs/Scrum_Daily.md')
-push_wiki('26-04-30-review', 'docs/Scrum_Review.md')
-push_wiki('26-04-30-retrospective', 'docs/Scrum_Retro.md')
-push_wiki('26-04-30-artifact', 'docs/Scrum_Artifacts.md')

+ 0 - 39
taiga_wiki_rename.py

@@ -1,39 +0,0 @@
-import requests, urllib3
-urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
-
-base_url = 'https://192.168.130.161/taiga/api/v1'
-auth_url = f'{base_url}/auth'
-auth = requests.post(auth_url, json={'type': 'normal', 'username': 'FrancoisLange', 'password': 'BTSai123'}, verify=False).json()
-headers = {'Authorization': f'Bearer {auth["auth_token"]}', 'Content-Type': 'application/json'}
-proj_id = 21
-
-# Fetch all existing links
-existing = requests.get(f'{base_url}/wiki-links?project={proj_id}', headers=headers, verify=False).json()
-
-# Delete the ones I made previously that had parentheses
-for link in existing:
-    if "26.04.30" in link['title']:
-        requests.delete(f'{base_url}/wiki-links/{link["id"]}', headers=headers, verify=False)
-        print(f"Deleted old link: {link['title']}")
-
-bookmarks = [
-    {"title": "26.04.30 PLAN", "href": "26-04-30-plan"},
-    {"title": "26.04.30 DAILY", "href": "26-04-30-daily"},
-    {"title": "26.04.30 REVIEW", "href": "26-04-30-review"},
-    {"title": "26.04.30 RETROSPECTIVE", "href": "26-04-30-retrospective"},
-    {"title": "26.04.30 ARTIFACT", "href": "26-04-30-artifact"}
-]
-
-# Calculate fresh order
-existing = requests.get(f'{base_url}/wiki-links?project={proj_id}', headers=headers, verify=False).json()
-max_order = max([e['order'] for e in existing]) if existing else 0
-
-for i, b in enumerate(bookmarks):
-    payload = {
-        "project": proj_id,
-        "title": b["title"],
-        "href": b["href"],
-        "order": max_order + i + 1
-    }
-    r = requests.post(f'{base_url}/wiki-links', json=payload, headers=headers, verify=False)
-    print(f'Created Bookmark {b["title"]}: {r.status_code}')
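
The ordering rule in `taiga_wiki_rename.py` (`max_order + i + 1`) appends recreated bookmarks after the highest existing `order` on the board. A small sketch of just that rule, assuming an illustrative helper name `assign_orders`:

```python
# Sketch of the bookmark-ordering rule: each new link gets an `order`
# strictly greater than every existing one, preserving insertion order.

def assign_orders(existing_orders, new_titles):
    base = max(existing_orders, default=0)  # an empty board starts at 0
    return [(title, base + i + 1) for i, title in enumerate(new_titles)]
```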

+ 0 - 16
task_plan.md

@@ -1,16 +0,0 @@
-# Project Tracking & Roadmap
-
-## Phase 1: Initialization [IN PROGRESS]
-- [x] Environment setup (Git, Venv)
-- [x] Created the AGENTS.md and README.md files
-- [x] Taiga MCP config (verified via `list_projects`)
-
-- [ ] Taiga-Git link tested (TG-1)
-
-## Phase 2: Core Development [TO DO]
-- [x] Created the main Python script
-- [x] Set up unit tests
-- [x] Automation via shell scripts
-
-## Notes & Ideas
-- *Idea: add an automatic backup script for Taiga.*

+ 0 - 46
test_login.py

@@ -1,46 +0,0 @@
-import bcrypt
-import myloginpath
-import pymysql
-
-def test_login(username, password):
-    conf = myloginpath.parse('app_auth')
-    conn = pymysql.connect(
-        host=conf.get('host', '127.0.0.1'),
-        user=conf.get('user'),
-        password=conf.get('password'),
-        database='food_db',
-        cursorclass=pymysql.cursors.DictCursor
-    )
-    with conn.cursor() as cursor:
-        cursor.execute("SELECT password_hash FROM users WHERE username = %s LIMIT 1", (username,))
-        result = cursor.fetchone()
-    conn.close()
-    if result:
-        return bcrypt.checkpw(password.encode('utf-8'), result['password_hash'].encode('utf-8'))
-    return False
-
-# Try registering a user and testing it
-def register_and_test():
-    conf = myloginpath.parse('app_auth')
-    conn = pymysql.connect(
-        host=conf.get('host', '127.0.0.1'),
-        user=conf.get('user'),
-        password=conf.get('password'),
-        database='food_db',
-        cursorclass=pymysql.cursors.DictCursor
-    )
-    username = "test_bot"
-    password = "bot_password"
-    hashed = bcrypt.hashpw(password.encode('utf-8'), bcrypt.gensalt()).decode('utf-8')
-    try:
-        with conn.cursor() as cursor:
-            cursor.execute("INSERT INTO users (username, password_hash, email) VALUES (%s, %s, %s)", (username, hashed, 'bot@bot.com'))
-            conn.commit()
-    except Exception as e:
-        print("Insert failed:", e)
-    conn.close()
-    
-    print("Login successful?", test_login(username, password))
-
-if __name__ == "__main__":
-    register_and_test()

+ 0 - 3
test_snmp.py

@@ -1,3 +0,0 @@
-from snmp_notifier import notifier
-notifier.send_alert('TEST ALERT: Application Monitoring Verified!')
-print("Alert sent.")

+ 0 - 40
update_bookmarks.py

@@ -1,40 +0,0 @@
-import requests
-import urllib3
-
-urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
-base_url = 'https://192.168.130.161/taiga/api/v1'
-auth = requests.post(f'{base_url}/auth', json={'type': 'normal', 'username': 'FrancoisLange', 'password': 'BTSai123'}, verify=False).json()
-headers = {'Authorization': f'Bearer {auth["auth_token"]}', 'Content-Type': 'application/json'}
-proj_id = 21
-
-slug = 'home'
-check_req = requests.get(f'{base_url}/wiki?project={proj_id}&slug={slug}', headers=headers, verify=False)
-if check_req.status_code == 200:
-    wiki_pages = check_req.json()
-    if len(wiki_pages) > 0:
-        page_id = wiki_pages[0]['id']
-        version = wiki_pages[0]['version']
-        content = wiki_pages[0]['content']
-        
-        # Append the new links if they aren't there
-        new_links = """
-- [26.05.07 DAILY](260507-daily)
-- [26.05.07 REVIEW](260507-review)
-- [26.05.07 RETROSPECTIVE](260507-retrospective)
-- [26.05.07 PLAN](260507-plan)
-"""
-        if "260507-daily" not in content:
-            content += new_links
-            payload = {
-                "project": proj_id,
-                "slug": slug,
-                "content": content,
-                "version": version
-            }
-            res = requests.put(f'{base_url}/wiki/{page_id}', json=payload, headers=headers, verify=False)
-            if res.status_code == 200:
-                print("Bookmarks updated successfully!")
-            else:
-                print(f"Failed to update bookmarks: {res.text}")
-        else:
-            print("Bookmarks already up to date.")

+ 0 - 55
update_taiga_status.py

@@ -1,55 +0,0 @@
-import requests
-import urllib3
-
-urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
-base_url = 'https://192.168.130.161/taiga/api/v1'
-
-def close_sprint_tasks():
-    try:
-        # Authenticate
-        auth = requests.post(f'{base_url}/auth', json={'type': 'normal', 'username': 'FrancoisLange', 'password': 'BTSai123'}, verify=False).json()
-        headers = {'Authorization': f'Bearer {auth["auth_token"]}', 'Content-Type': 'application/json'}
-        proj_id = 21
-
-        # 1. Get Milestone (Sprint 9)
-        milestones = requests.get(f'{base_url}/milestones?project={proj_id}', headers=headers, verify=False).json()
-        sprint9 = next((m for m in milestones if m['name'] == 'Sprint 9'), None)
-        
-        if not sprint9:
-            print("Sprint 9 not found!")
-            return
-            
-        sprint_id = sprint9['id']
-        
-        # 2. Get 'Closed' Task Status ID
-        statuses = requests.get(f'{base_url}/task-statuses?project={proj_id}', headers=headers, verify=False).json()
-        closed_status = next((s for s in statuses if s['is_closed']), None)
-        if not closed_status:
-            print("Could not find a 'Closed' task status for the project.")
-            return
-        
-        closed_status_id = closed_status['id']
-        print(f"Found Closed Status ID: {closed_status_id}")
-
-        # 3. Get all tasks for Sprint 9
-        tasks = requests.get(f'{base_url}/tasks?project={proj_id}&milestone={sprint_id}', headers=headers, verify=False).json()
-        
-        # 4. Close Tasks
-        for task in tasks:
-            if task['status'] != closed_status_id:
-                payload = {
-                    "status": closed_status_id,
-                    "version": task['version']
-                }
-                requests.patch(f'{base_url}/tasks/{task["id"]}', json=payload, headers=headers, verify=False)
-                print(f"Closed Task TG-{task['ref']}: {task['subject']}")
-            else:
-                print(f"Task TG-{task['ref']} already closed.")
-                
-        print("Successfully updated all Taiga tasks to Closed!")
-        
-    except Exception as e:
-        print(f"Error updating Taiga: {e}")
-
-if __name__ == "__main__":
-    close_sprint_tasks()

+ 0 - 7
wiki_links_test.py

@@ -1,7 +0,0 @@
-import requests, urllib3
-urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
-base_url = 'https://192.168.130.161/taiga/api/v1'
-auth = requests.post(f'{base_url}/auth', json={'type': 'normal', 'username': 'FrancoisLange', 'password': 'BTSai123'}, verify=False).json()
-headers = {'Authorization': f'Bearer {auth["auth_token"]}', 'Content-Type': 'application/json'}
-res = requests.get(f'{base_url}/wiki-links?project=21', headers=headers, verify=False)
-print(res.status_code, res.text)