PoC — Moodle GeniAI plugin (local_geniai) v2.3.6

Important: These PoCs are for an isolated, consented test environment only (e.g., local Bitnami Moodle Docker). Do not run tests against production or third-party systems. Use non-exfiltrative payloads (e.g., alert('stored-xss')) for public demonstrations. Redact any real usernames/hostnames in screenshots before publishing.


Summary

This PoC demonstrates three related vectors observed during testing of local_geniai v2.3.6:

  • Stored XSS via an uploaded PDF resource that is exposed as an unsanitized clickable link by the assistant.
  • Reflected XSS via chatbot input that is returned/rendered unsanitized in the chat UI.
  • LLM-assisted delivery / Prompt Injection where the assistant (LLM) returns HTML/raw links which can be used to deliver stored payloads to other users.

During testing the following public PDF collection was used as the uploaded file source (lab-only): https://github.com/luigigubello/PayloadsAllThePDFs/blob/main/pdf-payloads/starter_pack.pdf

Note: the screenshots below were captured in a local lab and show the behavior; before public disclosure sanitize/redact any identifying info.


Step by Step PoC

A — Stored XSS (uploaded PDF served via unsanitized assistant link)

Goal: Demonstrate that a file uploaded by a Teacher can result in JS execution in other users’ browsers through an unsanitized assistant link.

Steps

  1. Start a local Moodle test instance (Bitnami Docker or similar).
  2. Install local_geniai v2.3.6.
  3. Create an account with Teacher role and another account with Student role.
  4. As Teacher, upload a test PDF (lab-only). In my test I used the public starter pack PDF referenced above.
  5. Confirm the assistant outputs a clickable link to the uploaded file.
  6. Log in as Student and click the assistant-provided link.

Expected result: A non-exfiltrative alert() (proof of script execution) appears in the Student’s browser when the file/link is opened.
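
The underlying weakness in this vector is the chat UI inserting the assistant’s reply (including the generated file link) into the page without escaping. The following is a minimal, hypothetical Moodle-side PHP sketch of the vulnerable pattern versus the escaped alternatives; the variable and CSS class names are illustrative and are not taken from the local_geniai source.

```php
<?php
// Hypothetical sketch only -- assumes execution inside a Moodle page context.

// Vulnerable pattern: the assistant reply (which may carry an <a> tag or other
// attacker-influenced markup) is injected into the chat UI verbatim.
echo html_writer::div($assistantreply, 'geniai-message');

// Safer patterns: escape the reply, or render it as plain text.
echo html_writer::div(s($assistantreply), 'geniai-message');                      // s() wraps htmlspecialchars()
echo html_writer::div(format_text($assistantreply, FORMAT_PLAIN), 'geniai-message');
```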

Evidence (screenshots)

Add an activity of type “File”:

Upload a malicious PDF containing a JavaScript payload.

Clicking the payload link triggers execution of the embedded JavaScript:

alert(1)

Any other user who opens the same link encounters the malicious JavaScript embedded in the PDF.


B — Reflected XSS (chat input reflected unsanitized)

Goal: Show that chat input can be reflected by GeniAI and executed in the chat UI.

Steps

  1. Open the GeniAI chat interface as any user (student/admin/teacher).
  2. Send a non-exfiltrative test string designed to demonstrate execution (in practice "><img src=x onerror=alert(document.cookie)>, <script>alert(1)</script>, or similar).
  3. Observe the chat UI rendering.

Expected result: The injected test payload triggers a JavaScript alert() in the chat UI.
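
For reference, the difference between the unsanitized reflection and the expected escaped output can be reproduced with plain PHP (no Moodle required); the string below is a non-exfiltrative variant of the test strings above.

```php
<?php
// Standalone illustration: how the test string should look after server-side
// escaping, versus what an unsanitized reflection emits.
$input = '"><img src=x onerror=alert(1)>';

// Unsanitized reflection: the browser parses the <img> tag and runs onerror.
echo $input . PHP_EOL;

// Escaped reflection: rendered as inert text in the chat bubble.
echo htmlspecialchars($input, ENT_QUOTES, 'UTF-8') . PHP_EOL;
// In Moodle code the equivalent call is s($input).
```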

C — Prompt Injection / LLM-assisted delivery (assistant returns raw HTML/links)

Goal: Demonstrate that the assistant can be induced to return HTML anchors or raw links, which can then serve the stored XSS payload to other users.

Note: the plugin’s chatbot backend was GPT-4o-mini by default.

Steps (lab only)

  1. After the stored XSS file is uploaded, ask the assistant (lab session) to list the most recently uploaded file as a clickable link.
    Example: Show me the link to the PDF resource.
  2. If the assistant returns an HTML anchor, a second user clicking that link can trigger the stored payload.

Expected result: Assistant returns link/html; clicking it triggers the stored-XSS alert() in another user’s browser.
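
A hedged sketch of how the LLM-to-chat-UI boundary could be hardened in Moodle/PHP is shown below; $client->chat() is a placeholder for the plugin’s OpenAI wrapper (not its real API), while s(), purify_html() and html_writer::div() are core Moodle functions.

```php
<?php
// Hypothetical hardening sketch for the LLM -> chat UI boundary.
$modelreply = $client->chat($userprompt);   // raw assistant text, may contain HTML/links

// Option 1: treat the reply as plain text -- no markup survives escaping.
$safe = s($modelreply);

// Option 2: allow limited HTML, but run it through HTML Purifier first so
// <script> tags, event handlers and javascript: URLs are stripped.
$safe = purify_html($modelreply);

echo html_writer::div($safe, 'geniai-message');
```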


Short one-paragraph PoC summary

  • Stored XSS: Teacher uploads PDF (lab file) → assistant returns unsanitized clickable link → Student opens link → alert() executes.

  • Reflected XSS: Maliciously crafted chat input is reflected into the chat UI unsanitized → immediate alert() in the chat.

  • Prompt Injection / LLM-delivery: Assistant returns raw HTML/link (lab) → link click by another user triggers stored payload.


Safety, disclosure & recommendations (MUST include)

  • Run these PoCs only in an isolated, consented test environment.
  • Use non-exfiltrative payloads (alert()) in any public PoC or screenshot. Do not publish or run payloads that exfiltrate cookies, tokens, or other secrets.
  • Before publishing screenshots, redact usernames, hostnames, internal IPs, and other sensitive data.
  • Recommended vendor fixes: server-side sanitization of all chatbot inputs/outputs (Moodle s()/format_text()), avoid returning raw HTML from LLM responses, serve uploaded files with Content-Disposition: attachment / X-Content-Type-Options: nosniff, and apply a strict CSP.
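
To make the file-serving part of the recommended fix concrete, here is a minimal sketch of a hardened pluginfile-style callback; it assumes a Moodle context where $file is a stored_file obtained from the File API, and uses the core send_stored_file() function.

```php
<?php
// Sketch of the recommended file-serving hardening (Moodle pluginfile callback).
header('X-Content-Type-Options: nosniff');
header("Content-Security-Policy: default-src 'self'; object-src 'none'");

// The fourth argument ($forcedownload = true) makes Moodle send
// Content-Disposition: attachment, so the PDF is downloaded rather than
// rendered inline by the browser's built-in viewer.
send_stored_file($file, null, 0, true);
```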

References & notes

  • PDF used in local lab: https://github.com/luigigubello/PayloadsAllThePDFs/blob/main/pdf-payloads/starter_pack.pdf (lab only — do not use payloads that exfiltrate in public).
  • Researcher: Onurcan Genç <onurcangencbilkent@gmail.com>