
TG-439: Deploy Local SearXNG Web Search Tool

lanfr144 3 days ago
parent
commit
acc51aa882

+ 11 - 0
.agents/workflows/taiga-commit.md

@@ -0,0 +1,11 @@
+# Taiga Commit Workflow
+Description: Ensures every commit contains the Taiga ID so the board updates automatically.
+
+### Steps:
+1. **Analyze**: Inspect the modified files (`git status`).
+2. **Request ID**: Ask the user: "What is the Taiga task ID (e.g. 123) and the new status (e.g. closed)?"
+3. **Generate**: Produce a commit message that must include the tag `TG-<ID> #<STATUS>`.
+4. **Execute**:
+   - `git add .`
+   - `git commit -m "TG-<ID> #<STATUS> - [Concise description of the changes]"`
+   - `git push`
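
The workflow above can be sketched as a small guard script that validates the generated message before committing; the regex and the example ID/status are illustrative, not part of the workflow file:

```shell
# Validate a proposed commit message against the TG-<ID> #<STATUS> convention
# before running git commit. TASK_ID and STATUS here are example values.
TASK_ID="123"
STATUS="closed"
MSG="TG-${TASK_ID} #${STATUS} - Deploy local SearXNG web search tool"

if echo "$MSG" | grep -Eq '^TG-[0-9]+ #[a-z-]+ - .+'; then
    echo "OK: $MSG"
else
    echo "Rejected: missing TG-<ID> #<STATUS> tag" >&2
    exit 1
fi
```

Running such a check before `git commit` catches the most common webhook failure (a missing or malformed tag) locally instead of in Taiga.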

+ 18 - 0
.env

@@ -0,0 +1,18 @@
+# DOPRO1 (Food AI) Configuration
+# Do not commit this file to version control.
+
+# Git Configuration (git.btshub.lu)
+GIT_TOKEN=your_btshub_git_token
+GIT_USERNAME=lanfr144
+
+# Taiga Configuration (Local Network: 192.168.130.161)
+# Note: Network currently unavailable as per status report.
+TAIGA_URL=http://192.168.130.161/taiga
+TAIGA_TOKEN=your_local_taiga_token
+TAIGA_PROJECT_SLUG=your_project_slug
+
+DB_READER_PASS=reader_pass
+DB_LOADER_PASS=loader_pass
+DB_APP_AUTH_PASS=app_auth_placeholder_pass
+
+ADMIN_EMAIL=lanfr144@gmail.com

+ 16 - 0
AGENTS.md

@@ -0,0 +1,16 @@
+# Instructions for Antigravity
+
+## Tech Stack
+- **Language:** Python 3.1x (always use `venv`).
+- **Scripts:** Shell (Bash/Zsh) for automation.
+- **Project Management:** Taiga (Git-to-Taiga link active).
+
+## Code Conventions
+- **Python:** Strictly follow PEP 8. Write a docstring for every function.
+- **Git:** Mandatory message format: `TG-<ID> #<status> : <description>` (e.g. TG-42 #closed : Add email validation).
+- **Security:** NEVER include secrets in the code. Use `.env`.
+
+## Expected Behavior
+1. If a shell command is risky (`rm`, `chmod`), ask for confirmation.
+2. Always update `task_plan.md` after a major step is completed.
+3. Always update Taiga.


BIN
Retro Planning.pdf


+ 17 - 1
app.py

@@ -394,10 +394,26 @@ with tab_chat:
         user_eav = get_eav_profile(st.session_state["authenticated_user"])
         profile_text = ", ".join([f"{p['name']}: {p['value']}" for p in user_eav]) if user_eav else "None"
         
+        db_context = search_nutrition_db(prompt, user_eav)
+        searxng_context = ""
+        
+        if "No database records found" in db_context:
+            try:
+                searxng_url = os.environ.get("SEARXNG_HOST", "http://searxng:8080")
+                resp = requests.get(f"{searxng_url}/search", params={'q': prompt, 'format': 'json'}, timeout=5)
+                if resp.status_code == 200:
+                    results = resp.json().get('results', [])
+                    if results:
+                        snippets = [r.get('content', '') for r in results[:3]]
+                        searxng_context = "Web Search Context: " + " | ".join(snippets)
+            except requests.RequestException:
+                pass  # web search is best-effort; fall back to DB-only context
+                
         sys_prompt = f"""You are a helpful medical data analyst AI. 
         Health profile: {profile_text}. 
         Act as a specialized clinical dietitian. Provide a direct answer. Skip all thinking, reasoning, and pleasantries.
-        Use this database context if relevant to the user's question: {search_nutrition_db(prompt)}
+        Local Database Context: {db_context}
+        {searxng_context}
         """
         
         try:
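
The snippet-joining step in the hunk above can be isolated and checked without a running SearXNG instance. This sketch mirrors the fallback logic (a `results` list of dicts with `content` fields, as returned by the `format=json` endpoint); the function name is ours, not from app.py:

```python
def build_web_context(payload, max_snippets=3):
    """Join the top result snippets from a SearXNG JSON payload,
    mirroring the fallback logic added to app.py."""
    results = payload.get('results', [])
    if not results:
        return ""
    snippets = [r.get('content', '') for r in results[:max_snippets]]
    return "Web Search Context: " + " | ".join(snippets)

# Example with a stubbed response instead of a live requests.get call:
fake = {'results': [{'content': 'Rice: 130 kcal/100g'},
                    {'content': 'Rich in carbohydrates'}]}
print(build_web_context(fake))
# → Web Search Context: Rice: 130 kcal/100g | Rich in carbohydrates
```

Factoring the parsing out this way also keeps the Streamlit handler short and makes the empty-results path (`""`, so no web context is injected into the prompt) explicit.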

+ 5 - 0
create_bookmarks_page.py

@@ -15,6 +15,11 @@ content = """# BOOKMARKS
 - [26.05.07 REVIEW](260507-review)
 - [26.05.07 RETROSPECTIVE](260507-retrospective)
 - [26.05.07 PLAN](260507-plan)
+
+- [26.05.08 DAILY](260508-daily)
+- [26.05.08 REVIEW](260508-review)
+- [26.05.08 RETROSPECTIVE](260508-retrospective)
+- [26.05.08 PLAN](260508-plan)
 """
 
 check_req = requests.get(f'{base_url}/wiki?project={proj_id}&slug={slug}', headers=headers, verify=False)

+ 11 - 0
docker-compose.yml

@@ -25,6 +25,16 @@ services:
       - ollama_data:/root/.ollama
     restart: always
 
+  searxng:
+    image: searxng/searxng:latest
+    ports:
+      - "8080:8080"
+    volumes:
+      - ./searxng:/etc/searxng
+    environment:
+      - SEARXNG_BASE_URL=http://localhost:8080/
+    restart: always
+
   app:
     build:
       context: .
@@ -38,6 +48,7 @@ services:
       - APP_AUTH_USER=db_app_auth
       - APP_AUTH_PASS=${DB_APP_AUTH_PASS}
       - OLLAMA_HOST=http://ollama:11434
+      - SEARXNG_HOST=http://searxng:8080
     depends_on:
       mysql:
         condition: service_healthy

+ 125 - 0
project_text.txt

@@ -0,0 +1,125 @@
+Vision statement 
+A local food AI that provides full nutritional value information on any food and can 
+generate complete menu proposals based on the user's specification. 
+The user can: 
+* Create an account and log in. 
+* Get complete nutritional value information on any food, including macro nutrients, 
+minerals, vitamins, amino acids etc. 
+* Get the full nutritional value information for a given food combination, i.e. the user can 
+enter the quantities of several foods and get a full nutritional value overview. 
+* Search for specific nutrient content and get a sortable list of all foods that contain 
+this/these nutrient(s). 
+* Store food combinations in named and editable lists. 
+* Get menu proposals based on nutritional value and other food constraint input, e.g. 
+allergies. 
+* Freely chat about anything related to nutrition and get competent answers. 
+* The AI must have a local web search tool that it will use to anonymously gather any 
+info that it does not have in its local database. 
+* No user data leaves the server. 
+* The whole project must be in a public listed https://git.btshub.lu repo 
+named LocalFoodAI_<your IAM> that is updated in real-time. Make sure to prevent 
+confidential data from being uploaded. Add your teacher (id evegi144) as a collaborator. 
+Anyone should be able to clone the repo and get his own local food AI running with 
+ease. 
+* The application must be designed to run entirely on your provided Ubuntu 24.04 VM (8 
+vCPUs, 30 GB RAM, no dedicated GPU). You must evaluate and select appropriate 
+lightweight, quantized local LLMs (via Ollama) and databases that fit securely within 
+this hardware limit. 
+Roles 
+• You are the product owner and tech lead, whilst Antigravity is your development 
+team. This means that you interview the customer to extract the vision, and then 
+write the Taiga backlog. 
+• Your teacher is the customer and Scrum master. Add your teacher 
+(gilles.everling@education.lu) as a stakeholder to your Taiga project. 
+Workflow 
+• You follow the Scrum manual. 
+• You agree on the meaning of "Done" with your teacher.  
+• The sprint time box is one week. 
+• All meetings are documented in the Wiki. 
+• In the Daily Scrum meeting you answer and document in the Wiki the three 
+questions: 
+o What has been accomplished since the last meeting? 
+o What will be done before the next meeting? 
+o What obstacles are in the way? 
+• Thursday: 
+o Sprint Review meeting that lasts at most 15 minutes. 
+o Sprint Retrospective meeting that lasts at most 15 minutes. 
+o Sprint Planning meeting that lasts at most 30 minutes. You estimate the 
+capacity of work, produce the Sprint Backlog and Sprint Goal. 
+o Daily Scrum meeting that lasts at most 5 minutes.  
+• Friday: 
+o Daily Scrum meeting that lasts at most 5 minutes. 
+• You perform continuous product backlog grooming. 
+• During Sprint Planning, you create User Stories in Taiga and break them down 
+into smaller technical Tasks. To execute the work, you feed the overarching User 
+Story to the agent for context, but you assign it the specific Task for execution. 
+• Before the agent writes the code, it generates an Implementation Plan (an 
+Antigravity Artifact). You must set your Antigravity Review Policy to "Request 
+Review" (rather than "Always Proceed"). You have to read, evaluate, and approve 
+the AI's Python and database logic before it is committed, ensuring you actually 
+understand the code being generated. 
+• You can use the Gemini CLI to quickly test database queries or prompt logic in 
+the terminal before having the Antigravity agent implement it into the main 
+codebase. 
+• All artifacts generated by Antigravity, e.g. task lists, implementation plans and 
+browser recordings, must be attached to the corresponding task and/or user 
+story in Taiga. 
+• In a real-world Agile environment, code commits must be traceable back to the 
+business requirements (User Stories). You must instruct your Antigravity 
+agent(s): "When you commit this code, use the commit message: 'TG-<Taiga 
+task id>: <Taiga task description>'. Do not attempt to push". If the AI reports a 
+'sandbox error' or 'permission denied' when trying to push, it is not a 
+technical problem - it is a safety feature. Perform the push manually in the 
+terminal, then tell the AI to proceed with the next step. 
+• Daily Workflow with Antigravity 
+You are the Tech Lead; Antigravity is your Development Team. You do not write 
+code; you direct the Antigravity agent. 
+o Planning (Thursday): 
+• Create User Stories with clear story title and acceptance criteria in 
+Taiga for your planned work. To do this you can provide Antigravity 
+with the project vision statement and let it generate all the scrum 
+artifacts for you and it can even feed them directly into Taiga via 
+the Taiga API. 
+• In Taiga, break the Story into specific tasks for Antigravity and note 
+the task ids (e.g. #3). 
+o Development (Thursday/Friday): 
+• Orchestrate: Feed your Taiga Task to Antigravity’s Agent Manager. 
+Here's an example prompt: "Work on Taiga Task #1. Write a Python 
+script to initialize the SQLite database. Once complete, stage, 
+commit, and push the files with the commit message: 'TG-1: 
+Initialize SQLite DB'." 
+Pro-Tip: Create a PROJECT_CONTEXT.md file in the root of your 
+repository that outlines your tech stack (Ubuntu, Ollama, SearXNG, 
+Database, ...) and the strict "No external APIs" rule. Instruct 
+Antigravity to "Read PROJECT_CONTEXT.md before starting" so it 
+never hallucinates cloud-based solutions. 
+• Review: Set Review Policy to "Request Review." Evaluate the AI's 
+Python/Database logic before approving. Before approving the 
+commit, verify that: 
+• The agent included the TG-<ID> prefix in the commit 
+message. 
+• The code is strictly local (no external API calls). 
+• The logic is correct based on your technical research. 
+• Attach: Download the AI's Implementation Plan and attach it to the 
+Taiga ticket. 
+o The Golden Rule of committing: 
+Every time you push code, make sure Antigravity has linked it to a Taiga 
+Task ID in the commit message so the webhook works. If that's the case, 
+you only need to run git push; if not: 
+git add . 
+git commit -m "TG-<ID>: [Your short description]" 
+git push 
+(Example: git commit -m "TG-1: Setup SQLite DB structure" ) 
+o Verification 
+• Taiga: Open your User Story (e.g., #1) and check the History/Git 
+tab. You should see your commit message appear there 
+automatically. 
+• Gogs: Check your Webhook "Recent Deliveries." You should see 
+green checkmarks (204 No Content). 
+Troubleshooting: 
+• If you do not tell the AI to use the TG-<ID> format in the 
+commit message, your Taiga board will not update. 
+• If you see "Referenced element doesn't exist," verify your 
+commit message has the correct TG-<ID> format matching 
+an existing Story ID in Taiga. 
+ 

+ 7 - 0
retro_planning_text.txt

@@ -0,0 +1,7 @@
+RETRO PLANNING
+Retro planning, also known as backward planning or reverse planning. As the name suggests, this type of planning is built in reverse chronological order from a fixed deadline. This is especially useful when the delivery date is set from the start and cannot be moved!
+Retro planning / backward planning
+The reverse planning often takes the shape of a Gantt diagram. This chart is used to get a visual representation of the different steps and phases of the project, as well as the start and end dates of each task.
+Advantages: (1) Avoid missing deadlines: planning the development process from the deadline is the best way to plan and act in accordance with the time frame given to the project. (2) Visibility: planning in reverse gives you more visibility on your progress and the time left, thus making it easier for you to avoid delays and overruns. (3) Ensure the feasibility of the project: backward planning allows you to gauge whether or not the objectives are realistically achievable in a given time.
+(4) Adapt the duration of specific tasks according to the remaining time: you can anticipate the leeway available to spend more time on high value-added tasks. You could also identify the deliverables on which you could afford to spend less time and effort. Overall, planning tasks becomes more granular and accurate. (5) Manage resources more effectively: this goes hand in hand with the precision gained on project planning. By assessing the time allowed for each task, you can also infer the number of resources required to complete the tasks on time.
+Build your retro planning: start from the deadline, by positioning the last task to complete on the schedule on the D-Day, then the one before, and so forth. In some cases, you may not have enough room left to plan the first few tasks. This means the ideal start date is already behind you: you should then decide how to hit the deadline nonetheless: review your priorities or the scope of the project, adjust the time frame and delays, or add resources if possible. Whenever possible, try to include some leeway in your estimates so you can stay on track even when problems arise along the project life cycle!
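
The backward-planning procedure described in this file can be sketched in a few lines: starting from the fixed deadline, each task's start date is derived by subtracting durations in reverse order. The task names and durations below are made up for illustration:

```python
from datetime import date, timedelta

def backward_schedule(deadline, tasks):
    """Given a deadline and (name, duration_days) pairs in execution order,
    return (name, start, end) triples planned backward from the deadline."""
    schedule = []
    end = deadline
    for name, days in reversed(tasks):
        start = end - timedelta(days=days)
        schedule.append((name, start, end))
        end = start  # the previous task must finish where this one starts
    return list(reversed(schedule))

tasks = [("Build", 5), ("Test", 3), ("Deploy", 1)]
for name, start, end in backward_schedule(date(2026, 5, 14), tasks):
    print(f"{name}: {start} -> {end}")
```

If the computed start of the first task is already in the past, that is exactly the situation the text describes: the scope, time frame, or resources must change to hit the deadline.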

+ 41 - 0
scratch/audit_taiga.py

@@ -0,0 +1,41 @@
+import requests
+import urllib3
+
+urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
+base_url = 'https://192.168.130.161/taiga/api/v1'
+
+def audit():
+    try:
+        auth = requests.post(f'{base_url}/auth', json={'type': 'normal', 'username': 'FrancoisLange', 'password': 'BTSai123'}, verify=False).json()
+        headers = {'Authorization': f'Bearer {auth["auth_token"]}', 'Content-Type': 'application/json'}
+        proj_id = 21
+
+        milestones = requests.get(f'{base_url}/milestones?project={proj_id}', headers=headers, verify=False).json()
+        sprint11 = next((m for m in milestones if m['name'] == 'Sprint 11'), None)
+        
+        if not sprint11:
+            print("Sprint 11 not found.")
+            return
+
+        sprint_id = sprint11['id']
+        print(f"--- SPRINT 11 AUDIT ---")
+        
+        us_statuses = requests.get(f'{base_url}/userstory-statuses?project={proj_id}', headers=headers, verify=False).json()
+        status_map = {s['id']: s['name'] for s in us_statuses}
+        
+        us_list = requests.get(f'{base_url}/userstories?project={proj_id}&milestone={sprint_id}', headers=headers, verify=False).json()
+        
+        all_closed = True
+        for us in us_list:
+            status_name = status_map.get(us['status'], 'Unknown')
+            print(f"[US] {us['subject']} - Status: {status_name}")
+            if status_name.lower() != 'closed':
+                all_closed = False
+                
+        print(f"Sprint fully closed? {'YES' if all_closed else 'NO'}")
+
+    except Exception as e:
+        print(f"Failed to audit Taiga: {e}")
+
+if __name__ == "__main__":
+    audit()

+ 44 - 0
scratch/close_sprint12.py

@@ -0,0 +1,44 @@
+import requests
+import urllib3
+
+urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
+
+base_url = 'https://192.168.130.161/taiga/api/v1'
+auth = requests.post(f'{base_url}/auth', json={'type': 'normal', 'username': 'FrancoisLange', 'password': 'BTSai123'}, verify=False).json()
+headers = {'Authorization': f'Bearer {auth["auth_token"]}', 'Content-Type': 'application/json'}
+proj_id = 21
+
+print("Fetching Sprints...")
+milestones = requests.get(f'{base_url}/milestones?project={proj_id}', headers=headers, verify=False).json()
+sprint12 = next((m for m in milestones if m['name'] == 'Sprint 12'), None)
+
+if not sprint12:
+    print("Sprint 12 not found! Exiting.")
+    exit(1)
+    
+sprint_id = sprint12['id']
+
+# Get Closed status IDs
+us_statuses = requests.get(f'{base_url}/userstory-statuses?project={proj_id}', headers=headers, verify=False).json()
+task_statuses = requests.get(f'{base_url}/task-statuses?project={proj_id}', headers=headers, verify=False).json()
+
+closed_us_status = next((s['id'] for s in us_statuses if s['is_closed']), None)
+closed_task_status = next((s['id'] for s in task_statuses if s['is_closed']), None)
+
+# Update User Stories
+us_list = requests.get(f'{base_url}/userstories?project={proj_id}&milestone={sprint_id}', headers=headers, verify=False).json()
+for us in us_list:
+    if us['status'] != closed_us_status:
+        payload = {"status": closed_us_status, "version": us['version']}
+        requests.patch(f'{base_url}/userstories/{us["id"]}', json=payload, headers=headers, verify=False)
+        print(f"Closed User Story: {us['subject']}")
+
+# Update Tasks
+tasks = requests.get(f'{base_url}/tasks?project={proj_id}&milestone={sprint_id}', headers=headers, verify=False).json()
+for task in tasks:
+    if task['status'] != closed_task_status:
+        payload = {"status": closed_task_status, "version": task['version']}
+        requests.patch(f'{base_url}/tasks/{task["id"]}', json=payload, headers=headers, verify=False)
+        print(f"Closed Task: {task['subject']}")
+
+print("Sprint 12 successfully closed!")

+ 17 - 0
scratch/read_pdfs.py

@@ -0,0 +1,17 @@
+import os
+from pypdf import PdfReader
+
+def extract_pdf(pdf_path, txt_path):
+    try:
+        reader = PdfReader(pdf_path)
+        text = ""
+        for page in reader.pages:
+            text += page.extract_text() + "\n"
+        with open(txt_path, "w", encoding="utf-8") as f:
+            f.write(text)
+        print(f"Extracted {pdf_path} to {txt_path}")
+    except Exception as e:
+        print(f"Failed to read {pdf_path}: {e}")
+
+extract_pdf(r"c:\Users\lanfr144\Documents\DOPRO1\Antigravity\Food\Project.pdf", "project_text.txt")
+extract_pdf(r"c:\Users\lanfr144\Documents\DOPRO1\Antigravity\Food\Retro Planning.pdf", "retro_planning_text.txt")

+ 55 - 0
scratch/setup_sprint12_taiga.py

@@ -0,0 +1,55 @@
+import requests
+import urllib3
+from datetime import datetime, timedelta
+
+urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
+base_url = 'https://192.168.130.161/taiga/api/v1'
+
+auth = requests.post(f'{base_url}/auth', json={'type': 'normal', 'username': 'FrancoisLange', 'password': 'BTSai123'}, verify=False).json()
+headers = {'Authorization': f'Bearer {auth["auth_token"]}', 'Content-Type': 'application/json'}
+proj_id = 21
+
+print("Fetching Sprints...")
+milestones = requests.get(f'{base_url}/milestones?project={proj_id}', headers=headers, verify=False).json()
+sprint12 = next((m for m in milestones if m['name'] == 'Sprint 12'), None)
+
+if not sprint12:
+    print("Sprint 12 not found, creating it...")
+    payload = {
+        "project": proj_id,
+        "name": "Sprint 12",
+        "estimated_start": datetime.now().strftime('%Y-%m-%d'),
+        "estimated_finish": (datetime.now() + timedelta(days=7)).strftime('%Y-%m-%d')
+    }
+    sprint12 = requests.post(f'{base_url}/milestones', json=payload, headers=headers, verify=False).json()
+    
+sprint_id = sprint12['id']
+print(f"Sprint 12 ID: {sprint_id}")
+
+stories = [
+    {"subject": "AI Meal Plan PDF Generation", "description": "Implement FPDF2 script to intercept the Ollama Markdown output, parse the tables, and provide a downloadable PDF artifact for dietitians to hand out."},
+    {"subject": "Health Profile Input Constraints", "description": "Refactor the EAV profile tab to use specific drop-down menus instead of text-inputs to strictly enforce the clinical backend flags (Kosher, Halal, Diabetes)."}
+]
+
+for s in stories:
+    payload = {
+        "project": proj_id,
+        "subject": s["subject"],
+        "description": s["description"],
+        "milestone": sprint_id
+    }
+    res = requests.post(f'{base_url}/userstories', json=payload, headers=headers, verify=False)
+    if res.status_code == 201:
+        us = res.json()
+        print(f"Created US: {us['subject']}")
+        
+        # Create a task for it
+        t_payload = {
+            "project": proj_id,
+            "subject": f"Execute: {us['subject']}",
+            "user_story": us['id'],
+            "milestone": sprint_id
+        }
+        requests.post(f'{base_url}/tasks', json=t_payload, headers=headers, verify=False)
+
+print("Sprint 12 populated!")

+ 56 - 0
scratch/setup_sprint13_taiga.py

@@ -0,0 +1,56 @@
+import requests
+import urllib3
+from datetime import datetime, timedelta
+
+urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
+base_url = 'https://192.168.130.161/taiga/api/v1'
+
+auth = requests.post(f'{base_url}/auth', json={'type': 'normal', 'username': 'FrancoisLange', 'password': 'BTSai123'}, verify=False).json()
+headers = {'Authorization': f'Bearer {auth["auth_token"]}', 'Content-Type': 'application/json'}
+proj_id = 21
+
+print("Fetching Sprints...")
+milestones = requests.get(f'{base_url}/milestones?project={proj_id}', headers=headers, verify=False).json()
+sprint13 = next((m for m in milestones if m['name'] == 'Sprint 13'), None)
+
+if not sprint13:
+    print("Sprint 13 not found, creating it...")
+    payload = {
+        "project": proj_id,
+        "name": "Sprint 13",
+        "estimated_start": datetime.now().strftime('%Y-%m-%d'),
+        "estimated_finish": (datetime.now() + timedelta(days=7)).strftime('%Y-%m-%d')
+    }
+    sprint13 = requests.post(f'{base_url}/milestones', json=payload, headers=headers, verify=False).json()
+    
+sprint_id = sprint13['id']
+print(f"Sprint 13 ID: {sprint_id}")
+
+stories = [
+    {"subject": "Deploy Local SearXNG Web Search Tool", "description": "The AI must have a local web search tool that it will use to anonymously gather any info that it does not have in its local database."}
+]
+
+for s in stories:
+    payload = {
+        "project": proj_id,
+        "subject": s["subject"],
+        "description": s["description"],
+        "milestone": sprint_id
+    }
+    res = requests.post(f'{base_url}/userstories', json=payload, headers=headers, verify=False)
+    if res.status_code == 201:
+        us = res.json()
+        print(f"Created US: {us['subject']}")
+        
+        # Create tasks
+        tasks = ["Inject SearXNG container into docker-compose.yml", "Implement Web Search Heuristic fallback in AI Chat", "Integrate SearXNG API payload parsing with Ollama"]
+        for t in tasks:
+            t_payload = {
+                "project": proj_id,
+                "subject": t,
+                "user_story": us['id'],
+                "milestone": sprint_id
+            }
+            requests.post(f'{base_url}/tasks', json=t_payload, headers=headers, verify=False)
+
+print("Sprint 13 populated!")

+ 39 - 0
scratch/test_search.py

@@ -0,0 +1,39 @@
+import os
+import pymysql
+
+def get_db_connection():
+    db_host = os.environ.get('DB_HOST', '127.0.0.1')
+    db_user = os.environ.get('DB_READER_USER', 'db_reader')
+    db_pass = os.environ.get('DB_READER_PASS', 'reader_pass')
+    return pymysql.connect(host=db_host, user=db_user, password=db_pass, database='food_db', cursorclass=pymysql.cursors.DictCursor)
+
+conn = get_db_connection()
+with conn.cursor() as cursor:
+    cursor.execute("SELECT COUNT(*) as c FROM food_db.products_core")
+    print("Total Core Rows:", cursor.fetchone()['c'])
+    
+    sq = "white rice"
+    bool_search = " ".join([f"+{w}" for w in sq.split()])
+    print("Search query:", bool_search)
+    
+    sql = """
+    SELECT code, product_name 
+    FROM food_db.products_core
+    WHERE MATCH(product_name, ingredients_text) AGAINST(%s IN BOOLEAN MODE)
+    LIMIT 5
+    """
+    cursor.execute(sql, (bool_search,))
+    res = cursor.fetchall()
+    print("Result (MATCH product_name, ingredients_text):", len(res))
+    
+    # Check if MATCH(product_name) works instead
+    sql = """
+    SELECT code, product_name 
+    FROM food_db.products_core
+    WHERE product_name LIKE %s
+    LIMIT 5
+    """
+    cursor.execute(sql, (f"%{sq}%",))
+    res2 = cursor.fetchall()
+    print("Result (LIKE):", len(res2))
+conn.close()
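
The boolean-mode query construction in the script above is worth isolating so it can be checked without a database connection. This helper (our name, not from the script) reproduces the `+word` prefixing that makes MySQL FULLTEXT BOOLEAN MODE require every word:

```python
def to_boolean_query(search_term):
    """Prefix each word with '+' for MySQL FULLTEXT BOOLEAN MODE,
    requiring all words to be present (as in scratch/test_search.py)."""
    return " ".join(f"+{w}" for w in search_term.split())

print(to_boolean_query("white rice"))  # → +white +rice
```

Without the `+` operators, boolean mode treats words as optional and ranks by how many match, which is why "white rice" would otherwise also surface plain "rice" products.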

+ 8 - 0
searxng/settings.yml

@@ -0,0 +1,8 @@
+search:
+  formats:
+    - html
+    - json
+server:
+  port: 8080
+  bind_address: "0.0.0.0"
+  secret_key: "local_food_ai_secret"

+ 80 - 0
setup_sprint11_taiga.py

@@ -0,0 +1,80 @@
+import requests
+import urllib3
+from datetime import datetime, timedelta
+
+urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
+
+base_url = 'https://192.168.130.161/taiga/api/v1'
+auth = requests.post(f'{base_url}/auth', json={'type': 'normal', 'username': 'FrancoisLange', 'password': 'BTSai123'}, verify=False).json()
+headers = {'Authorization': f'Bearer {auth["auth_token"]}', 'Content-Type': 'application/json'}
+proj_id = 21
+
+print("Fetching Sprints...")
+milestones = requests.get(f'{base_url}/milestones?project={proj_id}', headers=headers, verify=False).json()
+sprint11 = next((m for m in milestones if m['name'] == 'Sprint 11'), None)
+
+if not sprint11:
+    print("Sprint 11 not found, creating it...")
+    payload = {
+        "project": proj_id,
+        "name": "Sprint 11",
+        "estimated_start": datetime.now().strftime('%Y-%m-%d'),
+        "estimated_finish": (datetime.now() + timedelta(days=7)).strftime('%Y-%m-%d')
+    }
+    sprint11 = requests.post(f'{base_url}/milestones', json=payload, headers=headers, verify=False).json()
+    
+sprint_id = sprint11['id']
+print(f"Sprint 11 ID: {sprint_id}")
+
+stories = [
+    {"subject": "Pre-Emptive Database Cleaning via Upsert", "description": "Rewrite ingestion logic to use COALESCE and ON DUPLICATE KEY UPDATE to clean massive CSV dataset without requiring GROUP BY in the app layer."},
+    {"subject": "Cascaded Search Logic & Nutrient Selectors", "description": "Update Plate Builder to allow scoped search (Name vs Ingredients) and sort results dynamically by nutrient richness (Iron, Vitamin C)."},
+    {"subject": "Food Scale Conversion Expansion", "description": "Update unit_converter.py to natively support extra-large, large, medium, and small egg sizing and generic food scales."},
+    {"subject": "Self-Detaching NOHUP Ingestion Sync", "description": "Refactor data_sync.sh to proactively request sudo authentication upfront and detach the process to survive SSH drops."}
+]
+
+for s in stories:
+    payload = {
+        "project": proj_id,
+        "subject": s["subject"],
+        "description": s["description"],
+        "milestone": sprint_id
+    }
+    res = requests.post(f'{base_url}/userstories', json=payload, headers=headers, verify=False)
+    if res.status_code == 201:
+        us = res.json()
+        print(f"Created US: {us['subject']}")
+        
+        # Create a task for it
+        t_payload = {
+            "project": proj_id,
+            "subject": f"Execute: {us['subject']}",
+            "user_story": us['id'],
+            "milestone": sprint_id
+        }
+        requests.post(f'{base_url}/tasks', json=t_payload, headers=headers, verify=False)
+
+print("Sprint 11 populated!")
+
+# Get Closed status IDs
+us_statuses = requests.get(f'{base_url}/userstory-statuses?project={proj_id}', headers=headers, verify=False).json()
+task_statuses = requests.get(f'{base_url}/task-statuses?project={proj_id}', headers=headers, verify=False).json()
+
+closed_us_status = next((s['id'] for s in us_statuses if s['is_closed']), None)
+closed_task_status = next((s['id'] for s in task_statuses if s['is_closed']), None)
+
+# Update User Stories
+us_list = requests.get(f'{base_url}/userstories?project={proj_id}&milestone={sprint_id}', headers=headers, verify=False).json()
+for us in us_list:
+    payload = {"status": closed_us_status, "version": us['version']}
+    requests.patch(f'{base_url}/userstories/{us["id"]}', json=payload, headers=headers, verify=False)
+    print(f"Closed User Story: {us['subject']}")
+
+# Update Tasks
+tasks = requests.get(f'{base_url}/tasks?project={proj_id}&milestone={sprint_id}', headers=headers, verify=False).json()
+for task in tasks:
+    payload = {"status": closed_task_status, "version": task['version']}
+    requests.patch(f'{base_url}/tasks/{task["id"]}', json=payload, headers=headers, verify=False)
+    print(f"Closed Task: {task['subject']}")
+
+print("Sprint 11 successfully closed!")

+ 39 - 0
taiga_audit.py

@@ -0,0 +1,39 @@
+import requests
+import urllib3
+import json
+
+urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
+
+base_url = 'https://192.168.130.161/taiga/api/v1'
+auth = requests.post(f'{base_url}/auth', json={'type': 'normal', 'username': 'FrancoisLange', 'password': 'BTSai123'}, verify=False).json()
+headers = {'Authorization': f'Bearer {auth["auth_token"]}', 'Content-Type': 'application/json'}
+
+proj_id = 21
+
+print("--- TAIGA AUDIT REPORT ---")
+# 1. Sprints Check
+milestones = requests.get(f'{base_url}/milestones?project={proj_id}', headers=headers, verify=False).json()
+print(f"Total Sprints: {len(milestones)}")
+for m in milestones:
+    us_list = m.get('user_stories', [])
+    if not us_list:
+        print(f"WARNING: Sprint '{m['name']}' has NO User Stories.")
+    else:
+        # Get tasks for sprint
+        tasks = requests.get(f'{base_url}/tasks?project={proj_id}&milestone={m["id"]}', headers=headers, verify=False).json()
+        if not tasks:
+            print(f"WARNING: Sprint '{m['name']}' has User Stories but NO Tasks.")
+
+# 2. User Stories Check
+us_all = requests.get(f'{base_url}/userstories?project={proj_id}', headers=headers, verify=False).json()
+for us in us_all:
+    tasks = requests.get(f'{base_url}/tasks?project={proj_id}&user_story={us["id"]}', headers=headers, verify=False).json()
+    if not tasks:
+        print(f"WARNING: User Story '{us['subject']}' (ID: {us['id']}) has NO Tasks.")
+
+# 3. Wiki Check
+wiki_pages = requests.get(f'{base_url}/wiki?project={proj_id}', headers=headers, verify=False).json()
+print("\n--- WIKI PAGES ---")
+for wp in wiki_pages:
+    print(f"- {wp['slug']}")
+

+ 73 - 0
taiga_sync_sprint11_p2.py

@@ -0,0 +1,73 @@
+import requests
+import urllib3
+
+urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
+
+base_url = 'https://192.168.130.161/taiga/api/v1'
+auth = requests.post(f'{base_url}/auth', json={'type': 'normal', 'username': 'FrancoisLange', 'password': 'BTSai123'}, verify=False).json()
+headers = {'Authorization': f'Bearer {auth["auth_token"]}', 'Content-Type': 'application/json'}
+proj_id = 21
+
+print("Fetching Sprints...")
+milestones = requests.get(f'{base_url}/milestones?project={proj_id}', headers=headers, verify=False).json()
+sprint11 = next((m for m in milestones if m['name'] == 'Sprint 11'), None)
+
+if not sprint11:
+    print("Sprint 11 not found! Exiting.")
+    exit(1)
+    
+sprint_id = sprint11['id']
+print(f"Sprint 11 ID: {sprint_id}")
+
+stories = [
+    {"subject": "AI Dietary Restriction SQL Enforcement", "description": "Dynamically map User EAV profiles (Diabetes, Kosher, Halal, Christian) directly to SQL WHERE clauses to enforce medical boundaries before AI generation."},
+    {"subject": "Zabbix Database Ingestion Telemetry", "description": "Build python script to fetch exact row counts from products_core and push SNMP traps to Zabbix Server 192.168.130.170."}
+]
+
+for s in stories:
+    payload = {
+        "project": proj_id,
+        "subject": s["subject"],
+        "description": s["description"],
+        "milestone": sprint_id
+    }
+    res = requests.post(f'{base_url}/userstories', json=payload, headers=headers, verify=False)
+    if res.status_code == 201:
+        us = res.json()
+        print(f"Created US: {us['subject']}")
+        
+        # Create a task for it
+        t_payload = {
+            "project": proj_id,
+            "subject": f"Execute: {us['subject']}",
+            "user_story": us['id'],
+            "milestone": sprint_id
+        }
+        requests.post(f'{base_url}/tasks', json=t_payload, headers=headers, verify=False)
+
+print("New User Stories populated!")
+
+# Get Closed status IDs
+us_statuses = requests.get(f'{base_url}/userstory-statuses?project={proj_id}', headers=headers, verify=False).json()
+task_statuses = requests.get(f'{base_url}/task-statuses?project={proj_id}', headers=headers, verify=False).json()
+
+closed_us_status = next((s['id'] for s in us_statuses if s['is_closed']), None)
+closed_task_status = next((s['id'] for s in task_statuses if s['is_closed']), None)
+
+if closed_us_status is None or closed_task_status is None:
+    print("No closed status defined for this project! Exiting.")
+    exit(1)
+
+# Update User Stories
+us_list = requests.get(f'{base_url}/userstories?project={proj_id}&milestone={sprint_id}', headers=headers, verify=False).json()
+for us in us_list:
+    if us['status'] != closed_us_status:
+        payload = {"status": closed_us_status, "version": us['version']}
+        requests.patch(f'{base_url}/userstories/{us["id"]}', json=payload, headers=headers, verify=False)
+        print(f"Closed User Story: {us['subject']}")
+
+# Update Tasks
+tasks = requests.get(f'{base_url}/tasks?project={proj_id}&milestone={sprint_id}', headers=headers, verify=False).json()
+for task in tasks:
+    if task['status'] != closed_task_status:
+        payload = {"status": closed_task_status, "version": task['version']}
+        requests.patch(f'{base_url}/tasks/{task["id"]}', json=payload, headers=headers, verify=False)
+        print(f"Closed Task: {task['subject']}")
+
+print("Sprint 11 successfully closed!")

+ 39 - 0
taiga_wiki_260508.py

@@ -0,0 +1,39 @@
+import requests
+import urllib3
+
+urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
+base_url = 'https://192.168.130.161/taiga/api/v1'
+auth = requests.post(f'{base_url}/auth', json={'type': 'normal', 'username': 'FrancoisLange', 'password': 'BTSai123'}, verify=False).json()
+headers = {'Authorization': f'Bearer {auth["auth_token"]}', 'Content-Type': 'application/json'}
+proj_id = 21
+
+slug = '260508-daily'
+content = """# Daily Scrum 26.05.08
+
+## What was done yesterday?
+- Addressed application crashes caused by missing columns (`search_limit`) and tables (`products_core`).
+- Discovered that the DB drop destroyed the entire schema temporarily until the offline ingestion recreated it, causing UI crashes.
+- Implemented `ON DUPLICATE KEY UPDATE` consolidation logic to fix the duplication explosion that degraded search performance.
+
+## What is the plan for today?
+- Ensure the initialization SQL officially defines all vertical partitions (`products_core`, `products_macros`, etc.) so the DB structure exists safely before the offline ingestion completes.
+- Move LLM model initialization into the container's `command` argument so the 1.3GB model download is strictly decoupled from the Streamlit UI loading phase.
+- Finalize Food Scale standardizations (`xl`, `l`, `m`, `s`) in `unit_converter.py`.
+
+## Blockers
+- **Data Race Condition**: Streaming the LLM model download crashed the AI interface when it was requested immediately at app startup. Fixed by moving the download into the container startup command.
+- **Table Missing Error**: Streamlit `app.py` queried `products_core` before pandas `to_sql` had a chance to create it. Fixed by explicitly declaring the schemas in `init.sql`.
+"""
+
+payload = {"project": proj_id, "slug": slug, "content": content}
+res = requests.post(f'{base_url}/wiki', json=payload, headers=headers, verify=False)
+if res.status_code == 201:
+    print("Created 260508-daily page!")
+else:
+    # Page already exists (or the create failed) -- fall back to an update
+    check_req = requests.get(f'{base_url}/wiki?project={proj_id}&slug={slug}', headers=headers, verify=False).json()
+    if len(check_req) > 0:
+        page_id = check_req[0]['id']
+        version = check_req[0]['version']
+        res2 = requests.put(f'{base_url}/wiki/{page_id}', json={"project": proj_id, "slug": slug, "content": content, "version": version}, headers=headers, verify=False)
+        print("Updated 260508-daily page!" if res2.status_code == 200 else f"Update failed: {res2.status_code}")
+    else:
+        print(f"Create failed ({res.status_code}) and no existing page found for slug '{slug}'.")

+ 16 - 0
task_plan.md

@@ -0,0 +1,16 @@
+# Suivi du Projet & Roadmap
+
+## Phase 1 : Initialisation [EN COURS]
+- [x] Configuration de l'environnement (Git, Venv)
+- [x] Création des fichiers AGENTS.md et README.md
+- [x] Config MCP Taiga (Vérifié via `list_projects`)
+
+- [ ] Liaison Taiga-Git testée (TG-1)
+
+## Phase 2 : Développement Coeur [À FAIRE]
+- [x] Création du script Python principal
+- [x] Mise en place des tests unitaires
+- [x] Automatisation via scripts Shell
+
+## Notes & Idées
+- *Idée : Ajouter un script de backup automatique vers Taiga.
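The backup idea above could start as a pure helper that turns local markdown files into Taiga wiki payloads; pushing them would then reuse the POST/PUT pattern from the wiki scripts in this commit. The function name, the slug derivation rule, and the directory layout are all assumptions for illustration, not an existing API:

```python
# Sketch: one wiki payload per *.md file, slug derived from the filename
# (e.g. task_plan.md -> "task-plan"). The payload shape mirrors the dicts
# the wiki scripts in this commit send to /api/v1/wiki.
from pathlib import Path

def wiki_backup_payloads(directory, project_id):
    """Build a list of Taiga wiki payloads from markdown files in `directory`."""
    payloads = []
    for md in sorted(Path(directory).glob("*.md")):
        payloads.append({
            "project": project_id,
            "slug": md.stem.lower().replace("_", "-"),
            "content": md.read_text(encoding="utf-8"),
        })
    return payloads
```

Each payload could then be POSTed, falling back to a versioned PUT when the slug already exists, exactly as `taiga_wiki_260508.py` does.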