Scan Details

Project Name
Scan Timestamp
Agentic Framework
LearnScope-main
05/16/25 13:16:35
openai-agents
Dependency Check

Agentic Workflow Graph
Legend
Agent
Tool
Tool Category
CustomTool
Basic
MCP Server
Findings
Vulnerabilities
0
Agents
3
Tools
0
Nodes Overview

Agents
Agent Name LLM Model System Prompt
human agent gpt-4o Jesteś wysokiej rangi profesorem humanistyki. Twoim zadaniem jest odpowiadać na pytania dotyczące nauk humanistycznych.
science agent gpt-4o Jesteś wysokiej rangi profesorem fizyki, matematyki i innych nauk ścisłych. Twoim zadaniem jest odpowiadać na pytania dotyczące nauk ścisłych.
triage agent gpt-4o Jesteś wysokiej rangi profesorem. Twoim zadaniem jest odpowiadać na pytania dotyczące nauk humanistycznych i ścisłych. Podziel pytania na 2 kategorie: humanistyka i nauki ścisłe.
Tools
Tool Name Tool Category Tool Description Number of Vulnerabilities
Agent Vulnerability Mitigations

Agent Name Vulnerability Mitigation Level* Explanation
human agent Input Length Limit None There are no guardrails provided to mitigate this vulnerability.
Personally Identifiable Information (PII) Leakage None There are no guardrails provided to mitigate this vulnerability. There are no instructions in place to mitigate this vulnerability.
Harmful/Toxic/Profane Content None There are no guardrails provided to mitigate this vulnerability. There are no instructions in place to mitigate this vulnerability.
Jailbreak None There are no guardrails provided to mitigate this vulnerability. There are no instructions designed to handle situations where the AI is asked to act against its intended role.
Intentional Misuse None There are no guardrails provided to mitigate this vulnerability. The instruction specifies that the AI's task is to answer questions regarding humanities. However, it lacks explicit instructions on handling requests outside this domain.
System Prompt Leakage None There are no guardrails provided to mitigate this vulnerability. There are no instructions in place to prevent the user from extracting the system prompt.
science agent Input Length Limit None There are no guardrails provided to mitigate this vulnerability.
Personally Identifiable Information (PII) Leakage None There are no guardrails provided to mitigate this vulnerability. There are no instructions in place to prevent the disclosure or handling of PII.
Harmful/Toxic/Profane Content None There are no guardrails provided to mitigate this vulnerability. There are no instructions in the prompt explicitly designed to prevent or manage harmful, toxic, or profane content.
Jailbreak None There are no guardrails provided to mitigate this vulnerability. There are no instructions to prevent the AI agent from being tricked into acting outside its intended role or guidelines.
Intentional Misuse Partial There are no guardrails provided to mitigate this vulnerability. The instruction 'Twoim zadaniem jest odpowiadać na pytania dotyczące nauk ścisłych.' can help mitigate misuse, as it specifies the AI's role and intended task.
System Prompt Leakage None There are no guardrails provided to mitigate this vulnerability. There are no explicit instructions to prevent revealing the system prompt or internal instructions.
triage agent Input Length Limit None There are no guardrails in place to mitigate this vulnerability.
Personally Identifiable Information (PII) Leakage None There are no guardrails in place to mitigate this vulnerability. There are no instructions in place to mitigate PII leakage. The system prompt does not mention handling of PII.
Harmful/Toxic/Profane Content None There are no guardrails in place to mitigate this vulnerability. There are no instructions in place to mitigate harmful, toxic, or profane content. The system prompt does not address handling such content.
Jailbreak None There are no guardrails in place to mitigate this vulnerability. There are no instructions in place to prevent jailbreak attempts. The system prompt does not include guidance about staying aligned with its instructions.
Intentional Misuse None There are no guardrails in place to mitigate this vulnerability. There is a partial instruction to focus on questions related to humanities and sciences, which might prevent some misuse, but it is not explicitly clear or strict enough to fully mitigate intentional misuse.
System Prompt Leakage None There are no guardrails in place to mitigate this vulnerability. There are no instructions in place to prevent system prompt leakage. The system prompt does not address keeping the instructions confidential.
*The "Mitigation Level" column shows to what extent a vulnerability is mitigated. "Full" indicates that both a system prompt instruction and a guardrail are in place. "Partial" indicates that one of the two is in place. "None" indicates that neither one is in place. (This applies to all vulnerabilities except for the "Input Length Limit", in which case only the guardrail is taken into account).
Agent Vulnerability Explanations

Agent Vulnerability Framework Mapping Description
Input Length Limit
OWASP LLM Top 10:
LLM01 - Prompt Injection LLM10 - Unbounded Consumption
OWASP Agentic:
T2 - Tool Misuse T4 - Resource Overload T6 - Intent Breaking & Goal Manipulation T7 - Misaligned & Deceptive Behaviors
An attacker can overwhelm the LLM's context with a very long message and cause it to ignore previous instructions or produce undesired actions.
Mitigation:
- add a Guardrail that checks if the user message contains more than the maximum allowed number of characters (200-500 will suffice in most cases).
Personally Identifiable Information (PII) Leakage
OWASP LLM Top 10:
LLM02 - Sensitive Information Disclosure LLM05 - Improper Output Handling
OWASP Agentic:
T7 - Misaligned & Deceptive Behaviors T9 - Identity Spoofing & Impersonation T15 - Human Manipulation
An attacker can manipulate the LLM into exfiltrating PII, or requesting users to disclose PII.
Mitigation:
- add a Guardrail that checks user and agent messages for PII and anonymizes them or flags them
- include agent instructions that clearly state that it should not handle PII.
Harmful/Toxic/Profane Content
OWASP LLM Top 10:
LLM05 - Improper Output Handling
OWASP Agentic:
T7 - Misaligned & Deceptive Behaviors T11 - Unexpected RCE and Code Attacks
An attacker can use the LLM to generate harmful, toxic, or profane content, or engage in conversations about such topics.
Mitigation:
- add a Guardrail that checks user and agent messages for toxic, harmful, and profane content
- include agent instructions that prohibit the agent from engaging in conversation about, or creating, harmful, toxic, or profane content.
Jailbreak
OWASP LLM Top 10:
LLM01 - Prompt Injection LLM02 - Sensitive Information Disclosure LLM05 - Improper Output Handling LLM09 - Misinformation LLM10 - Unbounded Consumption
OWASP Agentic:
T1 - Memory Poisoning T2 - Tool Misuse T3 - Privilege Compromise T4 - Resource Overload T6 - Intent Breaking & Goal Manipulation T7 - Misaligned & Deceptive Behaviors T9 - Identity Spoofing & Impersonation T11 - Unexpected RCE and Code Attacks T13 - Rogue Agents in Multi-Agent Systems T15 - Human Manipulation
An attacker can try to craft their messages in a way that makes the LLM forget all previous instructions and be used for any task the attacker wants.
Mitigation:
- add a Guardrail that checks user messages for attempts at circumventing the LLM's instructions
- include agent instructions that state that the agent should not alter its instructions, and ignore user messages that try to convince it otherwise.
Intentional Misuse
OWASP LLM Top 10:
LLM01 - Prompt Injection LLM10 - Unbounded Consumption
OWASP Agentic:
T2 - Tool Misuse T4 - Resource Overload T6 - Intent Breaking & Goal Manipulation
An attacker can try to use the instance of the LLM for tasks other than the LLM's intended usage to drain resources or for personal gain.
Mitigation:
- add a Guardrail that checks user messages for tasks that are not the agent's intended usage
- include agent instructions that prohibit the agent from engaging in any tasks that are not its intended usage
System Prompt Leakage
OWASP LLM Top 10:
LLM01 - Prompt Injection LLM02 - Sensitive Information Disclosure LLM07 - System Prompt Leakage
OWASP Agentic:
T2 - Tool Misuse T3 - Privilege Compromise T6 - Intent Breaking & Goal Manipulation T7 - Misaligned & Deceptive Behaviors
An attacker can make the LLM reveal the system prompt/instructions so that he can leak sensitive business logic or craft other attacks that are better suited for this LLM.
Mitigation:
- add a Guardrail that checks agent messages for the exact text of the agent's system prompt
- include agent instructions that highlight that the system prompt/instructions are confidential and should not be shared.