Compare commits

..

13 Commits

Author SHA1 Message Date
f061457529 maintenance: bump version to 0.15.0
Some checks failed
Lint and the Tests / test (ubuntu-latest, Python 3.8–3.12) jobs are failing (18–43s); the macos-latest and windows-latest test jobs are waiting to run.
maintenance: remove unused modules and features
2025-11-07 18:43:04 +01:00
86f98d0845 feat: remove unused pr package
feat: remove main entrypoint
feat: remove agent communication module
feat: remove agent manager module
feat: remove agent roles module
feat: remove autonomous detection module
feat: remove autonomous mode module
feat: remove cache api module
feat: remove tool cache module
feat: remove command handlers module
feat: remove command help docs module
feat: remove command multiplexer commands module
feat: remove config file
feat: remove core module
2025-11-07 18:42:32 +01:00
6509ccc5d3 refactor: switch from async to sync http client
feat: bump version to 1.13.0
docs: update changelog with release notes
2025-11-07 18:17:58 +01:00
a2468b7d5b feat: introduce agent communication, autonomous detection, and plugin support
feat: add interactive modes and a new agent execution tool
maintenance: update version to 1.12.0
perf: improve performance
refactor: update test execution with verbose output and error handling
2025-11-07 17:41:32 +01:00
0f5dce1617 feat: introduce agent communication, autonomous detection, and plugin support
feat: add interactive modes and agent execution tool
refactor: remove asyncio dependencies from core api and assistant
refactor: remove asyncio dependencies from command handlers
maintenance: bump version to 1.10.0
maintenance: update pyproject.toml dependencies and test configuration
2025-11-07 17:36:03 +01:00
5069fe8693 feat: bump version to 1.10.0
feat: introduce new agent communication system
feat: add autonomous detection capabilities
feat: add caching
feat: add plugin support
feat: add interactive modes
feat: add agent execution tool
2025-11-07 16:43:34 +01:00
c28c76e4b6 feat: introduce agent communication system and autonomous detection
feat: add caching, plugin support, and interactive modes
refactor: update tool discovery to use __all__
fix: correct import in rp.py and rp.py
docs: update changelog for version 1.9.0
maintenance: update pyproject.toml version to 1.9.0
refactor: remove unused interactive debugging code in assistant.py
feat: add agent execution tool
2025-11-07 16:43:10 +01:00
c000afc699 feat: implement agent communication bus
feat: add agent message dataclass
feat: define message types enum
feat: create agent communication bus class
feat: initialize database connection
feat: create agent messages table
feat: implement send_message method
feat: implement receive_messages method
feat: add agent roles
feat: add agent manager
feat: add agent communication
feat: create autonomous detection module
feat: create autonomous mode module
feat: add cache api
feat: add tool cache
feat: add command handlers
feat: add help docs
feat: add multiplexer commands
feat: update pyproject.toml version to 1.8.0
feat: update changelog with version 1.7.0 details
feat: create rp init file
feat: create rp main file
feat: create core assistant class
feat: add verbose mode to rp main
feat: add interactive mode to rp main
feat: add session management to rp main
feat: add plugin support to rp main
feat: add usage statistics to rp main
2025-11-07 16:21:47 +01:00
cf640a2782 feat: support displaying ads across multiple machines
maintenance: bump version to 1.7.0
2025-11-06 16:47:15 +01:00
66a45eb5e6 feat: implement distributed ads with unix sockets 2025-11-06 16:44:41 +01:00
f59894c65a feat: implement distributed dataset system for agents 2025-11-06 15:16:06 +01:00
31d272daa3 feat: did some extensive memory implementations. 2025-11-06 15:15:06 +01:00
27bcc7409e maintenance: update config paths and agent communication 2025-11-05 15:34:23 +01:00
116 changed files with 7671 additions and 3920 deletions

@@ -1,5 +1,101 @@
 # Changelog
+
+## Version 1.14.0 - 2025-11-07
+Several internal modules and features have been removed from the codebase. This simplifies the project and removes functionality that was no longer in use.
+**Changes:** 85 files, 15237 lines
+**Languages:** Markdown (51 lines), Other (562 lines), Python (14603 lines), TOML (21 lines)
+
+## Version 1.13.0 - 2025-11-07
+The application now uses a synchronous HTTP client, improving performance and simplifying code. This change doesn't affect the application's functionality for users.
+**Changes:** 3 files, 59 lines
+**Languages:** Markdown (8 lines), Python (49 lines), TOML (2 lines)
+
+## Version 1.12.0 - 2025-11-07
+This release introduces new agent capabilities like communication, autonomous detection, and plugin support. Users can now interact with the agent in new ways and benefit from improved performance and more robust testing.
+**Changes:** 4 files, 15 lines
+**Languages:** Markdown (8 lines), Other (2 lines), Python (3 lines), TOML (2 lines)
+
+## Version 1.11.0 - 2025-11-07
+This release introduces agent communication, autonomous detection, and plugin support, enabling more complex and flexible workflows. Interactive modes and a new agent execution tool have also been added, alongside performance improvements and dependency updates.
+**Changes:** 20 files, 614 lines
+**Languages:** Markdown (8 lines), Python (601 lines), TOML (5 lines)
+
+## Version 1.10.0 - 2025-11-07
+This release introduces significant new features, including agent communication, autonomous detection, and plugin support. Users can now leverage interactive modes and an agent execution tool, while developers benefit from caching and a version bump to 1.10.0.
+**Changes:** 2 files, 10 lines
+**Languages:** Markdown (8 lines), TOML (2 lines)
+
+## Version 1.9.0 - 2025-11-07
+This release introduces a new agent communication system and autonomous detection capabilities. It also adds caching, plugin support, and interactive modes, along with an agent execution tool.
+**Changes:** 8 files, 65 lines
+**Languages:** Markdown (8 lines), Other (7 lines), Python (48 lines), TOML (2 lines)
+
+## Version 1.8.0 - 2025-11-07
+This release introduces a new agent communication system, enabling agents to interact and share information. It also adds autonomous detection, caching, plugin support, and interactive modes to the core application.
+**Changes:** 81 files, 12528 lines
+**Languages:** Markdown (51 lines), Other (560 lines), Python (11915 lines), TOML (2 lines)
+
+## Version 1.7.0 - 2025-11-06
+Ads can now be shown on multiple computers simultaneously. This release bumps the version to 1.7.0.
+**Changes:** 2 files, 10 lines
+**Languages:** Markdown (8 lines), TOML (2 lines)
+
+## Version 1.6.0 - 2025-11-06
+The system now supports displaying ads across multiple machines. This improves ad delivery and scalability for users and developers.
+**Changes:** 19 files, 2312 lines
+**Languages:** Markdown (8 lines), Python (2299 lines), TOML (5 lines)
+
+## Version 1.5.0 - 2025-11-06
+Agents can now use datasets stored across multiple locations. This allows for larger datasets and improved performance.
+**Changes:** 2 files, 10 lines
+**Languages:** Markdown (8 lines), TOML (2 lines)
+
+## Version 1.4.0 - 2025-11-06
+Agents can now share data more efficiently using a new distributed dataset system. This improves performance and allows agents to work together on larger tasks.
+**Changes:** 48 files, 7423 lines
+**Languages:** Markdown (8 lines), Other (562 lines), Python (6848 lines), TOML (5 lines)
+
+## Version 1.3.0 - 2025-11-05
+This release updates how the software finds configuration files and handles communication with agents. These changes improve reliability and allow for more flexible configuration options.
+**Changes:** 32 files, 2964 lines
+**Languages:** Other (706 lines), Python (2214 lines), TOML (44 lines)
+
 All notable changes to this project will be documented in this file.
 The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),

@@ -24,7 +24,7 @@ install-dev:
 	pre-commit install

 test:
-	pytest
+	pytest tests/ -v --tb=long --full-trace -l --maxfail=10

 test-cov:
 	pytest --cov=pr --cov-report=html --cov-report=term-missing
@@ -67,10 +67,14 @@ backup:
 	zip -r rp.zip *
 	mv rp.zip ../

-implode:
-	python ../implode/imply.py rp.py
-	mv imploded.py /home/retoor/bin/rp
-	chmod +x /home/retoor/bin/rp
-	rp --debug
+serve:
+	rpserver
+
+implode: build
+	if [ -d /home/retoor/bin ]; then \
+		python -m rp.implode rp.py -o /home/retoor/bin/rp; \
+		chmod +x /home/retoor/bin/rp; \
+	fi

 .DEFAULT_GOAL := help

@@ -1,273 +0,0 @@
from dataclasses import dataclass
from typing import Dict, List, Set
@dataclass
class AgentRole:
name: str
description: str
system_prompt: str
allowed_tools: Set[str]
specialization_areas: List[str]
temperature: float = 0.7
max_tokens: int = 4096
AGENT_ROLES = {
"coding": AgentRole(
name="coding",
description="Specialized in writing, reviewing, and debugging code",
system_prompt="""You are a coding specialist AI assistant. Your primary responsibilities:
- Write clean, efficient, well-structured code
- Review code for bugs, security issues, and best practices
- Refactor and optimize existing code
- Implement features based on specifications
- Follow language-specific conventions and patterns
Focus on code quality, maintainability, and performance.""",
allowed_tools={
"read_file",
"write_file",
"list_directory",
"create_directory",
"change_directory",
"get_current_directory",
"python_exec",
"run_command",
"index_directory",
},
specialization_areas=[
"code_writing",
"code_review",
"debugging",
"refactoring",
],
temperature=0.3,
),
"research": AgentRole(
name="research",
description="Specialized in information gathering and analysis",
system_prompt="""You are a research specialist AI assistant. Your primary responsibilities:
- Search for and gather relevant information
- Analyze data and documentation
- Synthesize findings into clear summaries
- Verify facts and cross-reference sources
- Identify trends and patterns in information
Focus on accuracy, thoroughness, and clear communication of findings.""",
allowed_tools={
"read_file",
"list_directory",
"index_directory",
"http_fetch",
"web_search",
"web_search_news",
"db_query",
"db_get",
},
specialization_areas=[
"information_gathering",
"analysis",
"documentation",
"fact_checking",
],
temperature=0.5,
),
"data_analysis": AgentRole(
name="data_analysis",
description="Specialized in data processing and analysis",
system_prompt="""You are a data analysis specialist AI assistant. Your primary responsibilities:
- Process and analyze structured and unstructured data
- Perform statistical analysis and pattern recognition
- Query databases and extract insights
- Create data summaries and reports
- Identify anomalies and trends
Focus on accuracy, data integrity, and actionable insights.""",
allowed_tools={
"db_query",
"db_get",
"db_set",
"read_file",
"write_file",
"python_exec",
"run_command",
"list_directory",
},
specialization_areas=[
"data_processing",
"statistical_analysis",
"database_operations",
],
temperature=0.3,
),
"planning": AgentRole(
name="planning",
description="Specialized in task planning and coordination",
system_prompt="""You are a planning specialist AI assistant. Your primary responsibilities:
- Break down complex tasks into manageable steps
- Create execution plans and workflows
- Identify dependencies and prerequisites
- Estimate effort and resource requirements
- Coordinate between different components
Focus on logical organization, completeness, and feasibility.""",
allowed_tools={
"read_file",
"write_file",
"list_directory",
"index_directory",
"db_set",
"db_get",
},
specialization_areas=["task_decomposition", "workflow_design", "coordination"],
temperature=0.6,
),
"testing": AgentRole(
name="testing",
description="Specialized in testing and quality assurance",
system_prompt="""You are a testing specialist AI assistant. Your primary responsibilities:
- Design and execute test cases
- Identify edge cases and potential failures
- Verify functionality and correctness
- Test error handling and edge conditions
- Ensure code meets quality standards
Focus on thoroughness, coverage, and issue identification.""",
allowed_tools={
"read_file",
"write_file",
"python_exec",
"run_command",
"list_directory",
"db_query",
},
specialization_areas=["test_design", "quality_assurance", "validation"],
temperature=0.4,
),
"documentation": AgentRole(
name="documentation",
description="Specialized in creating and maintaining documentation",
system_prompt="""You are a documentation specialist AI assistant. Your primary responsibilities:
- Write clear, comprehensive documentation
- Create API references and user guides
- Document code with comments and docstrings
- Organize and structure information logically
- Ensure documentation is up-to-date and accurate
Focus on clarity, completeness, and user-friendliness.""",
allowed_tools={
"read_file",
"write_file",
"list_directory",
"index_directory",
"http_fetch",
"web_search",
},
specialization_areas=[
"technical_writing",
"documentation_organization",
"user_guides",
],
temperature=0.6,
),
"orchestrator": AgentRole(
name="orchestrator",
description="Coordinates multiple agents and manages overall execution",
system_prompt="""You are an orchestrator AI assistant. Your primary responsibilities:
- Coordinate multiple specialized agents
- Delegate tasks to appropriate agents
- Integrate results from different agents
- Manage overall workflow execution
- Ensure task completion and quality
Focus on effective delegation, integration, and overall success.""",
allowed_tools={
"read_file",
"write_file",
"list_directory",
"db_set",
"db_get",
"db_query",
},
specialization_areas=[
"agent_coordination",
"task_delegation",
"result_integration",
],
temperature=0.5,
),
"general": AgentRole(
name="general",
description="General purpose agent for miscellaneous tasks",
system_prompt="""You are a general purpose AI assistant. Your responsibilities:
- Handle diverse tasks across multiple domains
- Provide balanced assistance for various needs
- Adapt to different types of requests
- Collaborate with specialized agents when needed
Focus on versatility, helpfulness, and task completion.""",
allowed_tools={
"read_file",
"write_file",
"list_directory",
"create_directory",
"change_directory",
"get_current_directory",
"python_exec",
"run_command",
"run_command_interactive",
"http_fetch",
"web_search",
"web_search_news",
"db_set",
"db_get",
"db_query",
"index_directory",
},
specialization_areas=["general_assistance"],
temperature=0.7,
),
}
def get_agent_role(role_name: str) -> AgentRole:
return AGENT_ROLES.get(role_name, AGENT_ROLES["general"])
def list_agent_roles() -> Dict[str, AgentRole]:
return AGENT_ROLES.copy()
def get_recommended_agent(task_description: str) -> str:
task_lower = task_description.lower()
code_keywords = [
"code",
"implement",
"function",
"class",
"bug",
"debug",
"refactor",
"optimize",
]
research_keywords = [
"search",
"find",
"research",
"information",
"analyze",
"investigate",
]
data_keywords = ["data", "database", "query", "statistics", "analyze", "process"]
planning_keywords = ["plan", "organize", "workflow", "steps", "coordinate"]
testing_keywords = ["test", "verify", "validate", "check", "quality"]
doc_keywords = ["document", "documentation", "explain", "guide", "manual"]
if any(keyword in task_lower for keyword in code_keywords):
return "coding"
elif any(keyword in task_lower for keyword in research_keywords):
return "research"
elif any(keyword in task_lower for keyword in data_keywords):
return "data_analysis"
elif any(keyword in task_lower for keyword in planning_keywords):
return "planning"
elif any(keyword in task_lower for keyword in testing_keywords):
return "testing"
elif any(keyword in task_lower for keyword in doc_keywords):
return "documentation"
else:
return "general"

@@ -1,4 +0,0 @@
from pr.autonomous.detection import is_task_complete
from pr.autonomous.mode import process_response_autonomous, run_autonomous_mode
__all__ = ["is_task_complete", "run_autonomous_mode", "process_response_autonomous"]

@@ -1,3 +0,0 @@
from pr.commands.handlers import handle_command
__all__ = ["handle_command"]

@@ -1,11 +0,0 @@
from pr.core.api import call_api, list_models
from pr.core.assistant import Assistant
from pr.core.context import init_system_message, manage_context_window
__all__ = [
"Assistant",
"call_api",
"list_models",
"init_system_message",
"manage_context_window",
]

@@ -1,84 +0,0 @@
import re
from typing import Any, Dict, List
class AdvancedContextManager:
def __init__(self, knowledge_store=None, conversation_memory=None):
self.knowledge_store = knowledge_store
self.conversation_memory = conversation_memory
def adaptive_context_window(
self, messages: List[Dict[str, Any]], task_complexity: str = "medium"
) -> int:
complexity_thresholds = {
"simple": 10,
"medium": 20,
"complex": 35,
"very_complex": 50,
}
base_threshold = complexity_thresholds.get(task_complexity, 20)
message_complexity_score = self._analyze_message_complexity(messages)
if message_complexity_score > 0.7:
adjusted = int(base_threshold * 1.5)
elif message_complexity_score < 0.3:
adjusted = int(base_threshold * 0.7)
else:
adjusted = base_threshold
return max(base_threshold, adjusted)
def _analyze_message_complexity(self, messages: List[Dict[str, Any]]) -> float:
total_length = sum(len(msg.get("content", "")) for msg in messages)
avg_length = total_length / len(messages) if messages else 0
unique_words = set()
for msg in messages:
content = msg.get("content", "")
words = re.findall(r"\b\w+\b", content.lower())
unique_words.update(words)
vocabulary_richness = len(unique_words) / total_length if total_length > 0 else 0
# Simple complexity score based on length and richness
complexity = min(1.0, (avg_length / 100) + vocabulary_richness)
return complexity
def extract_key_sentences(self, text: str, top_k: int = 5) -> List[str]:
if not text.strip():
return []
sentences = re.split(r"(?<=[.!?])\s+", text)
if not sentences:
return []
# Simple scoring based on length and position
scored_sentences = []
for i, sentence in enumerate(sentences):
length_score = min(1.0, len(sentence) / 50)
position_score = 1.0 if i == 0 else 0.8 if i < len(sentences) / 2 else 0.6
score = (length_score + position_score) / 2
scored_sentences.append((sentence, score))
scored_sentences.sort(key=lambda x: x[1], reverse=True)
return [s[0] for s in scored_sentences[:top_k]]
def advanced_summarize_messages(self, messages: List[Dict[str, Any]]) -> str:
all_content = " ".join([msg.get("content", "") for msg in messages])
key_sentences = self.extract_key_sentences(all_content, top_k=3)
summary = " ".join(key_sentences)
return summary if summary else "No content to summarize."
def score_message_relevance(self, message: Dict[str, Any], context: str) -> float:
content = message.get("content", "")
content_words = set(re.findall(r"\b\w+\b", content.lower()))
context_words = set(re.findall(r"\b\w+\b", context.lower()))
intersection = content_words & context_words
union = content_words | context_words
if not union:
return 0.0
return len(intersection) / len(union)
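
A short usage sketch of the removed context manager; the messages are made up, and no knowledge or memory store is required since both default to None:

manager = AdvancedContextManager()
messages = [
    {"role": "user", "content": "Please refactor the database layer for speed."},
    {"role": "assistant", "content": "I will profile the slowest queries first."},
]
# "complex" selects a base window of 35; the result is never below that base.
print(manager.adaptive_context_window(messages, task_complexity="complex"))
# Jaccard word overlap between a message and a context string, in [0.0, 1.0].
print(manager.score_message_relevance(messages[0], "database refactoring speed"))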

@@ -1,95 +0,0 @@
import json
import logging
import urllib.error
import urllib.request
from pr.config import DEFAULT_MAX_TOKENS, DEFAULT_TEMPERATURE
from pr.core.context import auto_slim_messages
logger = logging.getLogger("pr")
def call_api(messages, model, api_url, api_key, use_tools, tools_definition, verbose=False):
try:
messages = auto_slim_messages(messages, verbose=verbose)
logger.debug(f"=== API CALL START ===")
logger.debug(f"Model: {model}")
logger.debug(f"API URL: {api_url}")
logger.debug(f"Use tools: {use_tools}")
logger.debug(f"Message count: {len(messages)}")
headers = {
"Content-Type": "application/json",
}
if api_key:
headers["Authorization"] = f"Bearer {api_key}"
data = {
"model": model,
"messages": messages,
"temperature": DEFAULT_TEMPERATURE,
"max_tokens": DEFAULT_MAX_TOKENS,
}
if "gpt-5" in model:
del data["temperature"]
del data["max_tokens"]
logger.debug("GPT-5 detected: removed temperature and max_tokens")
if use_tools:
data["tools"] = tools_definition
data["tool_choice"] = "auto"
logger.debug(f"Tool calling enabled with {len(tools_definition)} tools")
request_json = json.dumps(data)
logger.debug(f"Request payload size: {len(request_json)} bytes")
req = urllib.request.Request(
api_url, data=request_json.encode("utf-8"), headers=headers, method="POST"
)
logger.debug("Sending HTTP request...")
with urllib.request.urlopen(req) as response:
response_data = response.read().decode("utf-8")
logger.debug(f"Response received: {len(response_data)} bytes")
result = json.loads(response_data)
if "usage" in result:
logger.debug(f"Token usage: {result['usage']}")
if "choices" in result and result["choices"]:
choice = result["choices"][0]
if "message" in choice:
msg = choice["message"]
logger.debug(f"Response role: {msg.get('role', 'N/A')}")
if "content" in msg and msg["content"]:
logger.debug(f"Response content length: {len(msg['content'])} chars")
if "tool_calls" in msg:
logger.debug(f"Response contains {len(msg['tool_calls'])} tool call(s)")
logger.debug("=== API CALL END ===")
return result
except urllib.error.HTTPError as e:
error_body = e.read().decode("utf-8")
logger.error(f"API HTTP Error: {e.code} - {error_body}")
logger.debug("=== API CALL FAILED ===")
return {"error": f"API Error: {e.code}", "message": error_body}
except Exception as e:
logger.error(f"API call failed: {e}")
logger.debug("=== API CALL FAILED ===")
return {"error": str(e)}
def list_models(model_list_url, api_key):
try:
req = urllib.request.Request(model_list_url)
if api_key:
req.add_header("Authorization", f"Bearer {api_key}")
with urllib.request.urlopen(req) as response:
data = json.loads(response.read().decode("utf-8"))
return data.get("data", [])
except Exception as e:
return {"error": str(e)}

@@ -1,281 +0,0 @@
import json
import sqlite3
import time
from dataclasses import dataclass
from typing import Any, Dict, List, Optional
from .semantic_index import SemanticIndex
@dataclass
class KnowledgeEntry:
entry_id: str
category: str
content: str
metadata: Dict[str, Any]
created_at: float
updated_at: float
access_count: int = 0
importance_score: float = 1.0
def to_dict(self) -> Dict[str, Any]:
return {
"entry_id": self.entry_id,
"category": self.category,
"content": self.content,
"metadata": self.metadata,
"created_at": self.created_at,
"updated_at": self.updated_at,
"access_count": self.access_count,
"importance_score": self.importance_score,
}
class KnowledgeStore:
def __init__(self, db_path: str):
self.db_path = db_path
self.conn = sqlite3.connect(self.db_path, check_same_thread=False)
self.semantic_index = SemanticIndex()
self._initialize_store()
self._load_index()
def _initialize_store(self):
cursor = self.conn.cursor()
cursor.execute(
"""
CREATE TABLE IF NOT EXISTS knowledge_entries (
entry_id TEXT PRIMARY KEY,
category TEXT NOT NULL,
content TEXT NOT NULL,
metadata TEXT,
created_at REAL NOT NULL,
updated_at REAL NOT NULL,
access_count INTEGER DEFAULT 0,
importance_score REAL DEFAULT 1.0
)
"""
)
cursor.execute(
"""
CREATE INDEX IF NOT EXISTS idx_category ON knowledge_entries(category)
"""
)
cursor.execute(
"""
CREATE INDEX IF NOT EXISTS idx_importance ON knowledge_entries(importance_score DESC)
"""
)
cursor.execute(
"""
CREATE INDEX IF NOT EXISTS idx_created ON knowledge_entries(created_at DESC)
"""
)
self.conn.commit()
def _load_index(self):
cursor = self.conn.cursor()
cursor.execute("SELECT entry_id, content FROM knowledge_entries")
for row in cursor.fetchall():
self.semantic_index.add_document(row[0], row[1])
def add_entry(self, entry: KnowledgeEntry):
cursor = self.conn.cursor()
cursor.execute(
"""
INSERT OR REPLACE INTO knowledge_entries
(entry_id, category, content, metadata, created_at, updated_at, access_count, importance_score)
VALUES (?, ?, ?, ?, ?, ?, ?, ?)
""",
(
entry.entry_id,
entry.category,
entry.content,
json.dumps(entry.metadata),
entry.created_at,
entry.updated_at,
entry.access_count,
entry.importance_score,
),
)
self.conn.commit()
self.semantic_index.add_document(entry.entry_id, entry.content)
def get_entry(self, entry_id: str) -> Optional[KnowledgeEntry]:
cursor = self.conn.cursor()
cursor.execute(
"""
SELECT entry_id, category, content, metadata, created_at, updated_at, access_count, importance_score
FROM knowledge_entries
WHERE entry_id = ?
""",
(entry_id,),
)
row = cursor.fetchone()
if row:
cursor.execute(
"""
UPDATE knowledge_entries
SET access_count = access_count + 1
WHERE entry_id = ?
""",
(entry_id,),
)
self.conn.commit()
return KnowledgeEntry(
entry_id=row[0],
category=row[1],
content=row[2],
metadata=json.loads(row[3]) if row[3] else {},
created_at=row[4],
updated_at=row[5],
access_count=row[6] + 1,
importance_score=row[7],
)
return None
def search_entries(
self, query: str, category: Optional[str] = None, top_k: int = 5
) -> List[KnowledgeEntry]:
search_results = self.semantic_index.search(query, top_k * 2)
cursor = self.conn.cursor()
entries = []
for entry_id, score in search_results:
if category:
cursor.execute(
"""
SELECT entry_id, category, content, metadata, created_at, updated_at, access_count, importance_score
FROM knowledge_entries
WHERE entry_id = ? AND category = ?
""",
(entry_id, category),
)
else:
cursor.execute(
"""
SELECT entry_id, category, content, metadata, created_at, updated_at, access_count, importance_score
FROM knowledge_entries
WHERE entry_id = ?
""",
(entry_id,),
)
row = cursor.fetchone()
if row:
entry = KnowledgeEntry(
entry_id=row[0],
category=row[1],
content=row[2],
metadata=json.loads(row[3]) if row[3] else {},
created_at=row[4],
updated_at=row[5],
access_count=row[6],
importance_score=row[7],
)
entries.append(entry)
if len(entries) >= top_k:
break
return entries
def get_by_category(self, category: str, limit: int = 20) -> List[KnowledgeEntry]:
cursor = self.conn.cursor()
cursor.execute(
"""
SELECT entry_id, category, content, metadata, created_at, updated_at, access_count, importance_score
FROM knowledge_entries
WHERE category = ?
ORDER BY importance_score DESC, created_at DESC
LIMIT ?
""",
(category, limit),
)
entries = []
for row in cursor.fetchall():
entries.append(
KnowledgeEntry(
entry_id=row[0],
category=row[1],
content=row[2],
metadata=json.loads(row[3]) if row[3] else {},
created_at=row[4],
updated_at=row[5],
access_count=row[6],
importance_score=row[7],
)
)
return entries
def update_importance(self, entry_id: str, importance_score: float):
cursor = self.conn.cursor()
cursor.execute(
"""
UPDATE knowledge_entries
SET importance_score = ?, updated_at = ?
WHERE entry_id = ?
""",
(importance_score, time.time(), entry_id),
)
self.conn.commit()
def delete_entry(self, entry_id: str) -> bool:
cursor = self.conn.cursor()
cursor.execute("DELETE FROM knowledge_entries WHERE entry_id = ?", (entry_id,))
deleted = cursor.rowcount > 0
self.conn.commit()
if deleted:
self.semantic_index.remove_document(entry_id)
return deleted
def get_statistics(self) -> Dict[str, Any]:
cursor = self.conn.cursor()
cursor.execute("SELECT COUNT(*) FROM knowledge_entries")
total_entries = cursor.fetchone()[0]
cursor.execute("SELECT COUNT(DISTINCT category) FROM knowledge_entries")
total_categories = cursor.fetchone()[0]
cursor.execute(
"""
SELECT category, COUNT(*) as count
FROM knowledge_entries
GROUP BY category
ORDER BY count DESC
"""
)
category_counts = {row[0]: row[1] for row in cursor.fetchall()}
cursor.execute("SELECT SUM(access_count) FROM knowledge_entries")
total_accesses = cursor.fetchone()[0] or 0
return {
"total_entries": total_entries,
"total_categories": total_categories,
"category_distribution": category_counts,
"total_accesses": total_accesses,
"vocabulary_size": len(self.semantic_index.vocabulary),
}
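
An illustrative exercise of the removed store; it assumes the sibling SemanticIndex module deleted alongside it, and ":memory:" gives a throwaway SQLite database:

import time

store = KnowledgeStore(":memory:")
now = time.time()
store.add_entry(
    KnowledgeEntry(
        entry_id="note-1",
        category="architecture",
        content="The agent bus persists messages in SQLite.",
        metadata={"source": "design notes"},
        created_at=now,
        updated_at=now,
    )
)
hits = store.search_entries("sqlite message persistence", top_k=1)
print([e.entry_id for e in hits])  # ["note-1"], assuming the index matches these terms
print(store.get_statistics()["total_entries"])  # 1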

@@ -1,596 +0,0 @@
def get_tools_definition():
return [
{
"type": "function",
"function": {
"name": "kill_process",
"description": "Terminate a background process by its PID. Use this to stop processes started with run_command that exceeded their timeout.",
"parameters": {
"type": "object",
"properties": {
"pid": {
"type": "integer",
"description": "The process ID returned by run_command when status is 'running'.",
}
},
"required": ["pid"],
},
},
},
{
"type": "function",
"function": {
"name": "tail_process",
"description": "Monitor and retrieve output from a background process by its PID. Use this to check on processes started with run_command that exceeded their timeout.",
"parameters": {
"type": "object",
"properties": {
"pid": {
"type": "integer",
"description": "The process ID returned by run_command when status is 'running'.",
},
"timeout": {
"type": "integer",
"description": "Maximum seconds to wait for process completion. Returns partial output if still running.",
"default": 30,
},
},
"required": ["pid"],
},
},
},
{
"type": "function",
"function": {
"name": "http_fetch",
"description": "Fetch content from an HTTP URL",
"parameters": {
"type": "object",
"properties": {
"url": {"type": "string", "description": "The URL to fetch"},
"headers": {
"type": "object",
"description": "Optional HTTP headers",
},
},
"required": ["url"],
},
},
},
{
"type": "function",
"function": {
"name": "run_command",
"description": "Execute a shell command and capture output. Returns immediately after timeout with PID if still running. Use tail_process to monitor or kill_process to terminate long-running commands.",
"parameters": {
"type": "object",
"properties": {
"command": {
"type": "string",
"description": "The shell command to execute",
},
"timeout": {
"type": "integer",
"description": "Maximum seconds to wait for completion",
"default": 30,
},
},
"required": ["command"],
},
},
},
{
"type": "function",
"function": {
"name": "start_interactive_session",
"description": "Execute an interactive terminal command that requires user input or displays UI. The command runs in a dedicated session and returns a session name.",
"parameters": {
"type": "object",
"properties": {
"command": {
"type": "string",
"description": "The interactive command to execute (e.g., vim, nano, top)",
}
},
"required": ["command"],
},
},
},
{
"type": "function",
"function": {
"name": "send_input_to_session",
"description": "Send input to an interactive session.",
"parameters": {
"type": "object",
"properties": {
"session_name": {
"type": "string",
"description": "The name of the session",
},
"input_data": {
"type": "string",
"description": "The input to send to the session",
},
},
"required": ["session_name", "input_data"],
},
},
},
{
"type": "function",
"function": {
"name": "read_session_output",
"description": "Read output from an interactive session.",
"parameters": {
"type": "object",
"properties": {
"session_name": {
"type": "string",
"description": "The name of the session",
}
},
"required": ["session_name"],
},
},
},
{
"type": "function",
"function": {
"name": "close_interactive_session",
"description": "Close an interactive session.",
"parameters": {
"type": "object",
"properties": {
"session_name": {
"type": "string",
"description": "The name of the session",
}
},
"required": ["session_name"],
},
},
},
{
"type": "function",
"function": {
"name": "read_file",
"description": "Read contents of a file",
"parameters": {
"type": "object",
"properties": {
"filepath": {
"type": "string",
"description": "Path to the file",
}
},
"required": ["filepath"],
},
},
},
{
"type": "function",
"function": {
"name": "write_file",
"description": "Write content to a file",
"parameters": {
"type": "object",
"properties": {
"filepath": {
"type": "string",
"description": "Path to the file",
},
"content": {
"type": "string",
"description": "Content to write",
},
},
"required": ["filepath", "content"],
},
},
},
{
"type": "function",
"function": {
"name": "list_directory",
"description": "List directory contents",
"parameters": {
"type": "object",
"properties": {
"path": {
"type": "string",
"description": "Directory path",
"default": ".",
},
"recursive": {
"type": "boolean",
"description": "List recursively",
"default": False,
},
},
},
},
},
{
"type": "function",
"function": {
"name": "mkdir",
"description": "Create a new directory",
"parameters": {
"type": "object",
"properties": {
"path": {
"type": "string",
"description": "Path of the directory to create",
}
},
"required": ["path"],
},
},
},
{
"type": "function",
"function": {
"name": "chdir",
"description": "Change the current working directory",
"parameters": {
"type": "object",
"properties": {"path": {"type": "string", "description": "Path to change to"}},
"required": ["path"],
},
},
},
{
"type": "function",
"function": {
"name": "getpwd",
"description": "Get the current working directory",
"parameters": {"type": "object", "properties": {}},
},
},
{
"type": "function",
"function": {
"name": "db_set",
"description": "Set a key-value pair in the database",
"parameters": {
"type": "object",
"properties": {
"key": {"type": "string", "description": "The key"},
"value": {"type": "string", "description": "The value"},
},
"required": ["key", "value"],
},
},
},
{
"type": "function",
"function": {
"name": "db_get",
"description": "Get a value from the database",
"parameters": {
"type": "object",
"properties": {"key": {"type": "string", "description": "The key"}},
"required": ["key"],
},
},
},
{
"type": "function",
"function": {
"name": "db_query",
"description": "Execute a database query",
"parameters": {
"type": "object",
"properties": {"query": {"type": "string", "description": "SQL query"}},
"required": ["query"],
},
},
},
{
"type": "function",
"function": {
"name": "web_search",
"description": "Perform a web search",
"parameters": {
"type": "object",
"properties": {"query": {"type": "string", "description": "Search query"}},
"required": ["query"],
},
},
},
{
"type": "function",
"function": {
"name": "web_search_news",
"description": "Perform a web search for news",
"parameters": {
"type": "object",
"properties": {
"query": {
"type": "string",
"description": "Search query for news",
}
},
"required": ["query"],
},
},
},
{
"type": "function",
"function": {
"name": "python_exec",
"description": "Execute Python code",
"parameters": {
"type": "object",
"properties": {
"code": {
"type": "string",
"description": "Python code to execute",
}
},
"required": ["code"],
},
},
},
{
"type": "function",
"function": {
"name": "index_source_directory",
"description": "Index directory recursively and read all source files.",
"parameters": {
"type": "object",
"properties": {"path": {"type": "string", "description": "Path to index"}},
"required": ["path"],
},
},
},
{
"type": "function",
"function": {
"name": "search_replace",
"description": "Search and replace text in a file",
"parameters": {
"type": "object",
"properties": {
"filepath": {
"type": "string",
"description": "Path to the file",
},
"old_string": {
"type": "string",
"description": "String to replace",
},
"new_string": {
"type": "string",
"description": "Replacement string",
},
},
"required": ["filepath", "old_string", "new_string"],
},
},
},
{
"type": "function",
"function": {
"name": "apply_patch",
"description": "Apply a patch to a file, especially for source code",
"parameters": {
"type": "object",
"properties": {
"filepath": {
"type": "string",
"description": "Path to the file to patch",
},
"patch_content": {
"type": "string",
"description": "The patch content as a string",
},
},
"required": ["filepath", "patch_content"],
},
},
},
{
"type": "function",
"function": {
"name": "create_diff",
"description": "Create a unified diff between two files",
"parameters": {
"type": "object",
"properties": {
"file1": {
"type": "string",
"description": "Path to the first file",
},
"file2": {
"type": "string",
"description": "Path to the second file",
},
"fromfile": {
"type": "string",
"description": "Label for the first file",
"default": "file1",
},
"tofile": {
"type": "string",
"description": "Label for the second file",
"default": "file2",
},
},
"required": ["file1", "file2"],
},
},
},
{
"type": "function",
"function": {
"name": "open_editor",
"description": "Open the RPEditor for a file",
"parameters": {
"type": "object",
"properties": {
"filepath": {
"type": "string",
"description": "Path to the file",
}
},
"required": ["filepath"],
},
},
},
{
"type": "function",
"function": {
"name": "close_editor",
"description": "Close the RPEditor. Always close files when finished editing.",
"parameters": {
"type": "object",
"properties": {
"filepath": {
"type": "string",
"description": "Path to the file",
}
},
"required": ["filepath"],
},
},
},
{
"type": "function",
"function": {
"name": "editor_insert_text",
"description": "Insert text at cursor position in the editor",
"parameters": {
"type": "object",
"properties": {
"filepath": {
"type": "string",
"description": "Path to the file",
},
"text": {"type": "string", "description": "Text to insert"},
"line": {
"type": "integer",
"description": "Line number (optional)",
},
"col": {
"type": "integer",
"description": "Column number (optional)",
},
},
"required": ["filepath", "text"],
},
},
},
{
"type": "function",
"function": {
"name": "editor_replace_text",
"description": "Replace text in a range",
"parameters": {
"type": "object",
"properties": {
"filepath": {
"type": "string",
"description": "Path to the file",
},
"start_line": {"type": "integer", "description": "Start line"},
"start_col": {"type": "integer", "description": "Start column"},
"end_line": {"type": "integer", "description": "End line"},
"end_col": {"type": "integer", "description": "End column"},
"new_text": {"type": "string", "description": "New text"},
},
"required": [
"filepath",
"start_line",
"start_col",
"end_line",
"end_col",
"new_text",
],
},
},
},
{
"type": "function",
"function": {
"name": "editor_search",
"description": "Search for a pattern in the file",
"parameters": {
"type": "object",
"properties": {
"filepath": {
"type": "string",
"description": "Path to the file",
},
"pattern": {"type": "string", "description": "Regex pattern"},
"start_line": {
"type": "integer",
"description": "Start line",
"default": 0,
},
},
"required": ["filepath", "pattern"],
},
},
},
{
"type": "function",
"function": {
"name": "display_file_diff",
"description": "Display a visual colored diff between two files with syntax highlighting and statistics",
"parameters": {
"type": "object",
"properties": {
"filepath1": {
"type": "string",
"description": "Path to the original file",
},
"filepath2": {
"type": "string",
"description": "Path to the modified file",
},
"format_type": {
"type": "string",
"description": "Display format: 'unified' or 'side-by-side'",
"default": "unified",
},
},
"required": ["filepath1", "filepath2"],
},
},
},
{
"type": "function",
"function": {
"name": "display_edit_summary",
"description": "Display a summary of all edit operations performed during the session",
"parameters": {"type": "object", "properties": {}},
},
},
{
"type": "function",
"function": {
"name": "display_edit_timeline",
"description": "Display a timeline of all edit operations with details",
"parameters": {
"type": "object",
"properties": {
"show_content": {
"type": "boolean",
"description": "Show content previews",
"default": False,
}
},
},
},
},
{
"type": "function",
"function": {
"name": "clear_edit_tracker",
"description": "Clear the edit tracker to start fresh",
"parameters": {"type": "object", "properties": {}},
},
},
]
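
These definitions follow the OpenAI-style function-calling schema, so they plug straight into the tools field of a chat-completions request. A hedged sketch of the round trip, using the call_api helper shown earlier and a hypothetical handler table (only getpwd is wired up here):

import json

tools = get_tools_definition()
handlers = {"getpwd": lambda: {"cwd": "/home/user"}}  # hypothetical dispatch table

result = call_api(
    [{"role": "user", "content": "What directory are we in?"}],
    model="gpt-4o-mini",  # placeholder
    api_url="https://api.openai.com/v1/chat/completions",  # placeholder endpoint
    api_key="sk-...",  # placeholder
    use_tools=True,
    tools_definition=tools,
)
message = result["choices"][0]["message"]
for call in message.get("tool_calls", []):
    args = json.loads(call["function"]["arguments"] or "{}")
    print(handlers[call["function"]["name"]](**args))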

@@ -1,14 +0,0 @@
import contextlib
import traceback
from io import StringIO
def python_exec(code, python_globals):
try:
output = StringIO()
with contextlib.redirect_stdout(output):
exec(code, python_globals)
return {"status": "success", "output": output.getvalue()}
except Exception as e:
return {"status": "error", "error": str(e), "traceback": traceback.format_exc()}

@@ -1,11 +0,0 @@
from pr.ui.colors import Colors
from pr.ui.display import display_tool_call, print_autonomous_header
from pr.ui.rendering import highlight_code, render_markdown
__all__ = [
"Colors",
"highlight_code",
"render_markdown",
"display_tool_call",
"print_autonomous_header",
]

@@ -1,14 +0,0 @@
class Colors:
RESET = "\033[0m"
BOLD = "\033[1m"
RED = "\033[91m"
GREEN = "\033[92m"
YELLOW = "\033[93m"
BLUE = "\033[94m"
MAGENTA = "\033[95m"
CYAN = "\033[96m"
GRAY = "\033[90m"
WHITE = "\033[97m"
BG_BLUE = "\033[44m"
BG_GREEN = "\033[42m"
BG_RED = "\033[41m"

@@ -1,21 +0,0 @@
from pr.ui.colors import Colors
def display_tool_call(tool_name, arguments, status="running", result=None):
if status == "running":
return
args_str = ", ".join([f"{k}={str(v)[:20]}" for k, v in list(arguments.items())[:2]])
line = f"{tool_name}({args_str})"
if len(line) > 80:
line = line[:77] + "..."
print(f"{Colors.GRAY}{line}{Colors.RESET}")
def print_autonomous_header(task):
print(f"{Colors.BOLD}Task:{Colors.RESET} {task}")
print(f"{Colors.GRAY}r will work continuously until the task is complete.{Colors.RESET}")
print(f"{Colors.GRAY}Press Ctrl+C twice to interrupt.{Colors.RESET}\n")
print(f"{Colors.BOLD}{'' * 80}{Colors.RESET}\n")

@@ -3,44 +3,48 @@ requires = ["setuptools>=61.0"]
 build-backend = "setuptools.build_meta"

 [project]
-name = "pr-assistant"
-version = "1.0.0"
-description = "Professional CLI AI assistant with autonomous execution capabilities"
+name = "rp"
+version = "1.14.0"
+description = "R python edition. The ultimate autonomous AI CLI."
 readme = "README.md"
-requires-python = ">=3.8"
+requires-python = ">=3.12"
 license = {text = "MIT"}
 keywords = ["ai", "assistant", "cli", "automation", "openrouter", "autonomous"]
 authors = [
-    {name = "retoor", email = "retoor@example.com"}
+    {name = "retoor", email = "retoor@molodetz.nl"}
 ]
+dependencies = [
+    "pydantic>=2.12.3",
+    "prompt_toolkit>=3.0.0",
+    "requests>=2.31.0",
+]
 classifiers = [
     "Development Status :: 4 - Beta",
     "Intended Audience :: Developers",
     "License :: OSI Approved :: MIT License",
     "Programming Language :: Python :: 3",
-    "Programming Language :: Python :: 3.8",
-    "Programming Language :: Python :: 3.9",
-    "Programming Language :: Python :: 3.10",
-    "Programming Language :: Python :: 3.11",
-    "Programming Language :: Python :: 3.12",
+    "Programming Language :: Python :: 3.13",
     "Topic :: Software Development :: Libraries :: Python Modules",
     "Topic :: Scientific/Engineering :: Artificial Intelligence",
 ]

 [project.optional-dependencies]
 dev = [
-    "pytest",
-    "pytest-cov",
-    "black",
-    "flake8",
-    "mypy",
-    "pre-commit",
+    "pytest>=8.3.0",
+    "pytest-cov>=7.0.0",
+    "black>=25.9.0",
+    "flake8>=7.3.0",
+    "mypy>=1.18.2",
+    "pre-commit>=4.3.0",
 ]

 [project.scripts]
-pr = "pr.__main__:main"
-rp = "pr.__main__:main"
-rpe = "pr.editor:main"
+rp = "rp.__main__:main"
+rpe = "rp.editor:main"
+rpi = "rp.implode:main"
+rpserver = "rp.server:main"
+rpcgi = "rp.cgi:main"
+rpweb = "rp.web.app:main"

 [project.urls]
 Homepage = "https://retoor.molodetz.nl/retoor/rp"
@@ -50,7 +54,7 @@ Repository = "https://retoor.molodetz.nl/retoor/rp"

 [tool.setuptools.packages.find]
 where = ["."]
-include = ["pr*"]
+include = ["rp*"]
 exclude = ["tests*"]

 [tool.pytest.ini_options]
@@ -58,7 +62,7 @@ testpaths = ["tests"]
 python_files = ["test_*.py"]
 python_classes = ["Test*"]
 python_functions = ["test_*"]
-addopts = "-v --cov=pr --cov-report=term-missing --cov-report=html"
+addopts = "-v --cov=rp --cov-report=term-missing --cov-report=html"

 [tool.black]
 line-length = 100
@@ -77,7 +81,7 @@ extend-exclude = '''
 '''

 [tool.mypy]
-python_version = "3.8"
+python_version = "3.13"
 warn_return_any = true
 warn_unused_configs = true
 disallow_untyped_defs = false
@@ -88,7 +92,7 @@ warn_redundant_casts = true
 warn_unused_ignores = true

 [tool.coverage.run]
-source = ["pr"]
+source = ["rp"]
 omit = ["*/tests/*", "*/__pycache__/*"]

 [tool.coverage.report]
@@ -111,5 +115,5 @@ use_parentheses = true
 ensure_newline_before_comments = true

 [tool.bandit]
-exclude_dirs = ["tests", "venv", ".venv"]
+exclude_dirs = ["tests", "venv", ".venv","__pycache__"]
 skips = ["B101"]

rp.py

@@ -1,8 +1,13 @@
 #!/usr/bin/env python3
 # Trigger build
+import sys
+import os

-from pr.__main__ import main
+# Add current directory to path to ensure imports work
+sys.path.insert(0, os.path.dirname(os.path.abspath(__file__)))
+from rp.__main__ import main

 if __name__ == "__main__":
     main()

@@ -1,4 +1,4 @@
-from pr.core import Assistant
 __version__ = "1.0.0"
+from rp.core import Assistant
 __all__ = ["Assistant"]

@@ -1,22 +1,30 @@
 import argparse
 import sys

-from pr import __version__
-from pr.core import Assistant
+from rp import __version__
+from rp.core import Assistant


-def main():
+def main_def():
+    import tracemalloc
+
+    tracemalloc.start()
     parser = argparse.ArgumentParser(
-        description="PR Assistant - Professional CLI AI assistant with autonomous execution",
+        description="RP Assistant - Professional CLI AI assistant with visual effects, cost tracking, and autonomous execution",
         epilog="""
 Examples:
-  pr "What is Python?"            # Single query
-  pr -i                           # Interactive mode
-  pr -i --model gpt-4             # Use specific model
-  pr --save-session my-task -i    # Save session
-  pr --load-session my-task       # Load session
-  pr --list-sessions              # List all sessions
-  pr --usage                      # Show token usage stats
+  rp "What is Python?"            # Single query
+  rp -i                           # Interactive mode
+  rp -i --model gpt-4             # Use specific model
+  rp --save-session my-task -i    # Save session
+  rp --load-session my-task       # Load session
+  rp --list-sessions              # List all sessions
+  rp --usage                      # Show token usage stats
+
+Features:
+  Visual progress indicators during AI calls
+  Real-time cost tracking for each query
+  Sophisticated CLI with colors and effects
+  Tool execution with status updates

 Commands in interactive mode:
   /auto [task]      - Enter autonomous mode
@@ -30,9 +38,8 @@ Commands in interactive mode:
 """,
         formatter_class=argparse.RawDescriptionHelpFormatter,
     )
-
     parser.add_argument("message", nargs="?", help="Message to send to assistant")
-    parser.add_argument("--version", action="version", version=f"PR Assistant {__version__}")
+    parser.add_argument("--version", action="version", version=f"RP Assistant {__version__}")
     parser.add_argument("-m", "--model", help="AI model to use")
     parser.add_argument("-u", "--api-url", help="API endpoint URL")
     parser.add_argument("--model-list-url", help="Model list endpoint URL")
@@ -43,53 +50,39 @@ Commands in interactive mode:
     )
     parser.add_argument("--no-syntax", action="store_true", help="Disable syntax highlighting")
     parser.add_argument(
-        "--include-env",
-        action="store_true",
-        help="Include environment variables in context",
+        "--include-env", action="store_true", help="Include environment variables in context"
     )
     parser.add_argument("-c", "--context", action="append", help="Additional context files")
     parser.add_argument(
         "--api-mode", action="store_true", help="API mode for specialized interaction"
     )
     parser.add_argument(
-        "--output",
-        choices=["text", "json", "structured"],
-        default="text",
-        help="Output format",
+        "--output", choices=["text", "json", "structured"], default="text", help="Output format"
     )
     parser.add_argument("--quiet", action="store_true", help="Minimal output")
     parser.add_argument("--save-session", metavar="NAME", help="Save session with given name")
     parser.add_argument("--load-session", metavar="NAME", help="Load session with given name")
     parser.add_argument("--list-sessions", action="store_true", help="List all saved sessions")
     parser.add_argument("--delete-session", metavar="NAME", help="Delete a saved session")
     parser.add_argument(
-        "--export-session",
-        nargs=2,
-        metavar=("NAME", "FILE"),
-        help="Export session to file",
+        "--export-session", nargs=2, metavar=("NAME", "FILE"), help="Export session to file"
     )
     parser.add_argument("--usage", action="store_true", help="Show token usage statistics")
     parser.add_argument(
         "--create-config", action="store_true", help="Create default configuration file"
     )
     parser.add_argument("--plugins", action="store_true", help="List loaded plugins")
     args = parser.parse_args()

     if args.create_config:
-        from pr.core.config_loader import create_default_config
+        from rp.core.config_loader import create_default_config

         if create_default_config():
             print("Configuration file created at ~/.prrc")
         else:
             print("Error creating configuration file", file=sys.stderr)
         return

     if args.list_sessions:
-        from pr.core.session import SessionManager
+        from rp.core.session import SessionManager

         sm = SessionManager()
         sessions = sm.list_sessions()
@@ -103,9 +96,8 @@ Commands in interactive mode:
             print(f"  Messages: {sess['message_count']}")
             print()
         return

     if args.delete_session:
-        from pr.core.session import SessionManager
+        from rp.core.session import SessionManager

         sm = SessionManager()
         if sm.delete_session(args.delete_session):
@@ -113,9 +105,8 @@ Commands in interactive mode:
         else:
             print(f"Error deleting session '{args.delete_session}'", file=sys.stderr)
         return

     if args.export_session:
-        from pr.core.session import SessionManager
+        from rp.core.session import SessionManager

         sm = SessionManager()
         name, output_file = args.export_session
@@ -124,15 +115,13 @@ Commands in interactive mode:
             format_type = "markdown"
         elif output_file.endswith(".txt"):
             format_type = "txt"

         if sm.export_session(name, output_file, format_type):
             print(f"Session exported to {output_file}")
         else:
             print(f"Error exporting session", file=sys.stderr)
         return

     if args.usage:
-        from pr.core.usage_tracker import UsageTracker
+        from rp.core.usage_tracker import UsageTracker

         usage = UsageTracker.get_total_usage()
         print(f"\nTotal Usage Statistics:")
@@ -140,9 +129,8 @@ Commands in interactive mode:
         print(f"  Tokens: {usage['total_tokens']:,}")
         print(f"  Estimated Cost: ${usage['total_cost']:.4f}")
         return

     if args.plugins:
-        from pr.plugins.loader import PluginLoader
+        from rp.plugins.loader import PluginLoader

         loader = PluginLoader()
         loader.load_plugins()
@@ -154,10 +142,13 @@ Commands in interactive mode:
         for plugin in plugins:
             print(f"  - {plugin}")
         return

     assistant = Assistant(args)
     assistant.run()


+def main():
+    return main_def()
+

 if __name__ == "__main__":
     main()

@@ -46,6 +46,7 @@ class AgentMessage:
 class AgentCommunicationBus:
     def __init__(self, db_path: str):
         self.db_path = db_path
         self.conn = sqlite3.connect(db_path)
@@ -54,31 +55,18 @@ class AgentCommunicationBus:
     def _create_tables(self):
         cursor = self.conn.cursor()
         cursor.execute(
-            """
-            CREATE TABLE IF NOT EXISTS agent_messages (
-                message_id TEXT PRIMARY KEY,
-                from_agent TEXT,
-                to_agent TEXT,
-                message_type TEXT,
-                content TEXT,
-                metadata TEXT,
-                timestamp REAL,
-                session_id TEXT,
-                read INTEGER DEFAULT 0
-            )
-            """
+            "\n CREATE TABLE IF NOT EXISTS agent_messages (\n message_id TEXT PRIMARY KEY,\n from_agent TEXT,\n to_agent TEXT,\n message_type TEXT,\n content TEXT,\n metadata TEXT,\n timestamp REAL,\n session_id TEXT,\n read INTEGER DEFAULT 0\n )\n "
         )
+        cursor.execute("PRAGMA table_info(agent_messages)")
+        columns = [row[1] for row in cursor.fetchall()]
+        if "read" not in columns:
+            cursor.execute("ALTER TABLE agent_messages ADD COLUMN read INTEGER DEFAULT 0")
         self.conn.commit()
     def send_message(self, message: AgentMessage, session_id: Optional[str] = None):
         cursor = self.conn.cursor()
         cursor.execute(
-            """
-            INSERT INTO agent_messages
-            (message_id, from_agent, to_agent, message_type, content, metadata, timestamp, session_id)
-            VALUES (?, ?, ?, ?, ?, ?, ?, ?)
-            """,
+            "\n INSERT INTO agent_messages\n (message_id, from_agent, to_agent, message_type, content, metadata, timestamp, session_id)\n VALUES (?, ?, ?, ?, ?, ?, ?, ?)\n ",
             (
                 message.message_id,
                 message.from_agent,
@@ -90,32 +78,20 @@ class AgentCommunicationBus:
                 session_id,
             ),
         )
         self.conn.commit()
-    def get_messages(self, agent_id: str, unread_only: bool = True) -> List[AgentMessage]:
+    def receive_messages(self, agent_id: str, unread_only: bool = True) -> List[AgentMessage]:
         cursor = self.conn.cursor()
         if unread_only:
             cursor.execute(
-                """
-                SELECT message_id, from_agent, to_agent, message_type, content, metadata, timestamp
-                FROM agent_messages
-                WHERE to_agent = ? AND read = 0
-                ORDER BY timestamp ASC
-                """,
+                "\n SELECT message_id, from_agent, to_agent, message_type, content, metadata, timestamp\n FROM agent_messages\n WHERE to_agent = ? AND read = 0\n ORDER BY timestamp ASC\n ",
                 (agent_id,),
             )
         else:
             cursor.execute(
-                """
-                SELECT message_id, from_agent, to_agent, message_type, content, metadata, timestamp
-                FROM agent_messages
-                WHERE to_agent = ?
-                ORDER BY timestamp ASC
-                """,
+                "\n SELECT message_id, from_agent, to_agent, message_type, content, metadata, timestamp\n FROM agent_messages\n WHERE to_agent = ?\n ORDER BY timestamp ASC\n ",
                 (agent_id,),
             )
         messages = []
         for row in cursor.fetchall():
             messages.append(
@@ -147,21 +123,12 @@ class AgentCommunicationBus:
     def close(self):
         self.conn.close()
-    def receive_messages(self, agent_id: str) -> List[AgentMessage]:
-        return self.get_messages(agent_id, unread_only=True)
     def get_conversation_history(self, agent_a: str, agent_b: str) -> List[AgentMessage]:
         cursor = self.conn.cursor()
         cursor.execute(
-            """
-            SELECT message_id, from_agent, to_agent, message_type, content, metadata, timestamp
-            FROM agent_messages
-            WHERE (from_agent = ? AND to_agent = ?) OR (from_agent = ? AND to_agent = ?)
-            ORDER BY timestamp ASC
-            """,
+            "\n SELECT message_id, from_agent, to_agent, message_type, content, metadata, timestamp\n FROM agent_messages\n WHERE (from_agent = ? AND to_agent = ?) OR (from_agent = ? AND to_agent = ?)\n ORDER BY timestamp ASC\n ",
             (agent_a, agent_b, agent_b, agent_a),
        )
         messages = []
         for row in cursor.fetchall():
             messages.append(
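
Note how the new `read` column is backfilled: `_create_tables` checks the live schema with PRAGMA before issuing an ALTER TABLE, so databases created by older versions keep working. A minimal standalone sketch of that migration pattern (the `ensure_column` helper is illustrative, not part of the codebase):

import sqlite3

def ensure_column(conn, table, ddl):
    # PRAGMA table_info yields one row per column; index 1 holds the column name.
    cursor = conn.cursor()
    cursor.execute(f"PRAGMA table_info({table})")
    existing = [row[1] for row in cursor.fetchall()]
    if ddl.split()[0] not in existing:
        # ADD COLUMN is the only schema change SQLite needs for this upgrade.
        cursor.execute(f"ALTER TABLE {table} ADD COLUMN {ddl}")
        conn.commit()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE agent_messages (message_id TEXT PRIMARY KEY)")
ensure_column(conn, "agent_messages", "read INTEGER DEFAULT 0")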


@@ -1,9 +1,7 @@
-import json
 import time
 import uuid
 from dataclasses import dataclass, field
 from typing import Any, Callable, Dict, List, Optional
 from ..memory.knowledge_store import KnowledgeStore
 from .agent_communication import AgentCommunicationBus, AgentMessage, MessageType
 from .agent_roles import AgentRole, get_agent_role
@@ -31,6 +29,7 @@ class AgentInstance:
 class AgentManager:
     def __init__(self, db_path: str, api_caller: Callable):
         self.db_path = db_path
         self.api_caller = api_caller
@@ -42,10 +41,8 @@ class AgentManager:
     def create_agent(self, role_name: str, agent_id: Optional[str] = None) -> str:
         if agent_id is None:
             agent_id = f"{role_name}_{str(uuid.uuid4())[:8]}"
         role = get_agent_role(role_name)
         agent = AgentInstance(agent_id=agent_id, role=role)
         self.active_agents[agent_id] = agent
         return agent_id
@@ -64,14 +61,11 @@ class AgentManager:
         agent = self.get_agent(agent_id)
         if not agent:
             return {"error": f"Agent {agent_id} not found"}
         if context:
             agent.context.update(context)
         agent.add_message("user", task)
         knowledge_matches = self.knowledge_store.search_entries(task, top_k=3)
         agent.task_count += 1
         messages = agent.get_messages_for_api()
         if knowledge_matches:
             knowledge_content = "Knowledge base matches based on your query:\\n"
@@ -79,18 +73,15 @@ class AgentManager:
                 shortened_content = entry.content[:2000]
                 knowledge_content += f"{i}. {shortened_content}\\n\\n"
             messages.insert(-1, {"role": "user", "content": knowledge_content})
         try:
             response = self.api_caller(
                 messages=messages,
                 temperature=agent.role.temperature,
                 max_tokens=agent.role.max_tokens,
             )
             if response and "choices" in response:
                 assistant_message = response["choices"][0]["message"]["content"]
                 agent.add_message("assistant", assistant_message)
                 return {
                     "success": True,
                     "agent_id": agent_id,
@@ -100,7 +91,6 @@ class AgentManager:
                 }
             else:
                 return {"error": "Invalid API response", "agent_id": agent_id}
         except Exception as e:
             return {"error": str(e), "agent_id": agent_id}
@@ -121,44 +111,31 @@ class AgentManager:
             timestamp=time.time(),
             message_id=str(uuid.uuid4())[:16],
         )
         self.communication_bus.send_message(message, self.session_id)
         return message.message_id
     def get_agent_messages(self, agent_id: str, unread_only: bool = True) -> List[AgentMessage]:
-        return self.communication_bus.get_messages(agent_id, unread_only)
+        return self.communication_bus.receive_messages(agent_id, unread_only)
     def collaborate_agents(self, orchestrator_id: str, task: str, agent_roles: List[str]):
         orchestrator = self.get_agent(orchestrator_id)
         if not orchestrator:
             orchestrator_id = self.create_agent("orchestrator")
             orchestrator = self.get_agent(orchestrator_id)
         worker_agents = []
         for role in agent_roles:
             agent_id = self.create_agent(role)
             worker_agents.append({"agent_id": agent_id, "role": role})
-        orchestration_prompt = f"""Task: {task}
-Available specialized agents:
-{chr(10).join([f"- {a['agent_id']} ({a['role']})" for a in worker_agents])}
-Break down the task and delegate subtasks to appropriate agents. Coordinate their work and integrate results."""
+        orchestration_prompt = f"Task: {task}\n\nAvailable specialized agents:\n{chr(10).join([f'- {a['agent_id']} ({a['role']})' for a in worker_agents])}\n\nBreak down the task and delegate subtasks to appropriate agents. Coordinate their work and integrate results."
         orchestrator_result = self.execute_agent_task(orchestrator_id, orchestration_prompt)
         results = {"orchestrator": orchestrator_result, "agents": []}
         for agent_info in worker_agents:
             agent_id = agent_info["agent_id"]
             messages = self.get_agent_messages(agent_id)
             for msg in messages:
                 subtask = msg.content
                 result = self.execute_agent_task(agent_id, subtask)
                 results["agents"].append(result)
                 self.send_agent_message(
                     from_agent_id=agent_id,
                     to_agent_id=orchestrator_id,
@@ -166,10 +143,9 @@ Break down the task and delegate subtasks to appropriate agents. Coordinate thei
                     message_type=MessageType.RESPONSE,
                 )
                 self.communication_bus.mark_as_read(msg.message_id)
         return results
-    def get_session_summary(self) -> str:
+    def get_session_summary(self) -> Dict[str, Any]:
         summary = {
             "session_id": self.session_id,
             "active_agents": len(self.active_agents),
@@ -183,7 +159,7 @@ Break down the task and delegate subtasks to appropriate agents. Coordinate thei
             for agent_id, agent in self.active_agents.items()
             ],
         }
-        return json.dumps(summary)
+        return summary
     def clear_session(self):
         self.active_agents.clear()
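
Since get_session_summary now returns a dict rather than a JSON string (which is why the json import above was dropped), callers index it directly and serialize only at the boundary that needs text. A minimal sketch of the calling pattern:

import json

def print_session_summary(manager):
    # get_session_summary() is now a plain dict, not json.dumps(...) output.
    summary = manager.get_session_summary()
    print(summary["active_agents"])           # fields are directly accessible
    print(json.dumps(summary, indent=2))      # serialize explicitly when text is needed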

rp/agents/agent_roles.py (new file, 160 lines)

@@ -0,0 +1,160 @@
from dataclasses import dataclass
from typing import Dict, List, Set
@dataclass
class AgentRole:
    name: str
    description: str
    system_prompt: str
    allowed_tools: Set[str]
    specialization_areas: List[str]
    temperature: float = 0.7
    max_tokens: int = 4096
AGENT_ROLES = {
    "coding": AgentRole(
        name="coding",
        description="Specialized in writing, reviewing, and debugging code",
        system_prompt="You are a coding specialist AI assistant. Your primary responsibilities:\n- Write clean, efficient, well-structured code\n- Review code for bugs, security issues, and best practices\n- Refactor and optimize existing code\n- Implement features based on specifications\n- Follow language-specific conventions and patterns\nFocus on code quality, maintainability, and performance.",
        allowed_tools={
            "read_file",
            "write_file",
            "list_directory",
            "create_directory",
            "change_directory",
            "get_current_directory",
            "python_exec",
            "run_command",
            "index_directory",
        },
        specialization_areas=["code_writing", "code_review", "debugging", "refactoring"],
        temperature=0.3,
    ),
    "research": AgentRole(
        name="research",
        description="Specialized in information gathering and analysis",
        system_prompt="You are a research specialist AI assistant. Your primary responsibilities:\n- Search for and gather relevant information\n- Analyze data and documentation\n- Synthesize findings into clear summaries\n- Verify facts and cross-reference sources\n- Identify trends and patterns in information\nFocus on accuracy, thoroughness, and clear communication of findings.",
        allowed_tools={
            "read_file",
            "list_directory",
            "index_directory",
            "http_fetch",
            "web_search",
            "web_search_news",
            "db_query",
            "db_get",
        },
        specialization_areas=[
            "information_gathering",
            "analysis",
            "documentation",
            "fact_checking",
        ],
        temperature=0.5,
    ),
    "data_analysis": AgentRole(
        name="data_analysis",
        description="Specialized in data processing and analysis",
        system_prompt="You are a data analysis specialist AI assistant. Your primary responsibilities:\n- Process and analyze structured and unstructured data\n- Perform statistical analysis and pattern recognition\n- Query databases and extract insights\n- Create data summaries and reports\n- Identify anomalies and trends\nFocus on accuracy, data integrity, and actionable insights.",
        allowed_tools={
            "db_query",
            "db_get",
            "db_set",
            "read_file",
            "write_file",
            "python_exec",
            "run_command",
            "list_directory",
        },
        specialization_areas=["data_processing", "statistical_analysis", "database_operations"],
        temperature=0.3,
    ),
    "planning": AgentRole(
        name="planning",
        description="Specialized in task planning and coordination",
        system_prompt="You are a planning specialist AI assistant. Your primary responsibilities:\n- Break down complex tasks into manageable steps\n- Create execution plans and workflows\n- Identify dependencies and prerequisites\n- Estimate effort and resource requirements\n- Coordinate between different components\nFocus on logical organization, completeness, and feasibility.",
        allowed_tools={
            "read_file",
            "write_file",
            "list_directory",
            "index_directory",
            "db_set",
            "db_get",
        },
        specialization_areas=["task_decomposition", "workflow_design", "coordination"],
        temperature=0.6,
    ),
    "testing": AgentRole(
        name="testing",
        description="Specialized in testing and quality assurance",
        system_prompt="You are a testing specialist AI assistant. Your primary responsibilities:\n- Design and execute test cases\n- Identify edge cases and potential failures\n- Verify functionality and correctness\n- Test error handling and edge conditions\n- Ensure code meets quality standards\nFocus on thoroughness, coverage, and issue identification.",
        allowed_tools={
            "read_file",
            "write_file",
            "python_exec",
            "run_command",
            "list_directory",
            "db_query",
        },
        specialization_areas=["test_design", "quality_assurance", "validation"],
        temperature=0.4,
    ),
    "documentation": AgentRole(
        name="documentation",
        description="Specialized in creating and maintaining documentation",
        system_prompt="You are a documentation specialist AI assistant. Your primary responsibilities:\n- Write clear, comprehensive documentation\n- Create API references and user guides\n- Document code with comments and docstrings\n- Organize and structure information logically\n- Ensure documentation is up-to-date and accurate\nFocus on clarity, completeness, and user-friendliness.",
        allowed_tools={
            "read_file",
            "write_file",
            "list_directory",
            "index_directory",
            "http_fetch",
            "web_search",
        },
        specialization_areas=["technical_writing", "documentation_organization", "user_guides"],
        temperature=0.6,
    ),
    "orchestrator": AgentRole(
        name="orchestrator",
        description="Coordinates multiple agents and manages overall execution",
        system_prompt="You are an orchestrator AI assistant. Your primary responsibilities:\n- Coordinate multiple specialized agents\n- Delegate tasks to appropriate agents\n- Integrate results from different agents\n- Manage overall workflow execution\n- Ensure task completion and quality\nFocus on effective delegation, integration, and overall success.",
        allowed_tools={"read_file", "write_file", "list_directory", "db_set", "db_get", "db_query"},
        specialization_areas=["agent_coordination", "task_delegation", "result_integration"],
        temperature=0.5,
    ),
    "general": AgentRole(
        name="general",
        description="General purpose agent for miscellaneous tasks",
        system_prompt="You are a general purpose AI assistant. Your responsibilities:\n- Handle diverse tasks across multiple domains\n- Provide balanced assistance for various needs\n- Adapt to different types of requests\n- Collaborate with specialized agents when needed\nFocus on versatility, helpfulness, and task completion.",
        allowed_tools={
            "read_file",
            "write_file",
            "list_directory",
            "create_directory",
            "change_directory",
            "get_current_directory",
            "python_exec",
            "run_command",
            "run_command_interactive",
            "http_fetch",
            "web_search",
            "web_search_news",
            "db_set",
            "db_get",
            "db_query",
            "index_directory",
        },
        specialization_areas=["general_assistance"],
        temperature=0.7,
    ),
}
def get_agent_role(role_name: str) -> AgentRole:
    return AGENT_ROLES.get(role_name, AGENT_ROLES["general"])
def list_agent_roles() -> Dict[str, AgentRole]:
    return AGENT_ROLES.copy()


@@ -0,0 +1,4 @@
from rp.autonomous.detection import is_task_complete
from rp.autonomous.mode import process_response_autonomous, run_autonomous_mode
__all__ = ["is_task_complete", "run_autonomous_mode", "process_response_autonomous"]


@ -1,17 +1,14 @@
from pr.config import MAX_AUTONOMOUS_ITERATIONS from rp.config import MAX_AUTONOMOUS_ITERATIONS
from pr.ui import Colors from rp.ui import Colors
def is_task_complete(response, iteration): def is_task_complete(response, iteration):
if "error" in response: if "error" in response:
return True return True
if "choices" not in response or not response["choices"]: if "choices" not in response or not response["choices"]:
return True return True
message = response["choices"][0]["message"] message = response["choices"][0]["message"]
content = message.get("content", "").lower() content = message.get("content", "").lower()
completion_keywords = [ completion_keywords = [
"task complete", "task complete",
"task is complete", "task is complete",
@ -24,7 +21,6 @@ def is_task_complete(response, iteration):
"setup complete", "setup complete",
"installation complete", "installation complete",
] ]
error_keywords = [ error_keywords = [
"cannot proceed", "cannot proceed",
"unable to continue", "unable to continue",
@ -32,22 +28,16 @@ def is_task_complete(response, iteration):
"cannot complete", "cannot complete",
"impossible to", "impossible to",
] ]
has_tool_calls = "tool_calls" in message and message["tool_calls"] has_tool_calls = "tool_calls" in message and message["tool_calls"]
mentions_completion = any(keyword in content for keyword in completion_keywords) mentions_completion = any((keyword in content for keyword in completion_keywords))
mentions_error = any(keyword in content for keyword in error_keywords) mentions_error = any((keyword in content for keyword in error_keywords))
if mentions_error: if mentions_error:
return True return True
if mentions_completion and (not has_tool_calls):
if mentions_completion and not has_tool_calls:
return True return True
if iteration > 5 and (not has_tool_calls):
if iteration > 5 and not has_tool_calls:
return True return True
if iteration >= MAX_AUTONOMOUS_ITERATIONS: if iteration >= MAX_AUTONOMOUS_ITERATIONS:
print(f"{Colors.YELLOW}⚠ Maximum iterations reached{Colors.RESET}") print(f"{Colors.YELLOW}⚠ Maximum iterations reached{Colors.RESET}")
return True return True
return False return False
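
The completion heuristic stops the loop when a reply mentions completion and carries no pending tool calls. A quick sketch against a hand-built response:

from rp.autonomous.detection import is_task_complete

response = {
    "choices": [
        {"message": {"content": "Task complete. All files were written.", "tool_calls": []}}
    ]
}
# "task complete" matches a completion keyword, and the empty tool_calls list is
# falsy, so the autonomous loop stops on this response.
assert is_task_complete(response, iteration=1) is True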


@@ -1,39 +1,33 @@
 import json
 import logging
 import time
-from pr.autonomous.detection import is_task_complete
-from pr.core.context import truncate_tool_result
-from pr.ui import Colors, display_tool_call
+from rp.autonomous.detection import is_task_complete
+from rp.core.api import call_api
+from rp.core.context import truncate_tool_result
+from rp.tools.base import get_tools_definition
+from rp.ui import Colors, display_tool_call
-logger = logging.getLogger("pr")
+logger = logging.getLogger("rp")
 def run_autonomous_mode(assistant, task):
     assistant.autonomous_mode = True
     assistant.autonomous_iterations = 0
     logger.debug(f"=== AUTONOMOUS MODE START ===")
     logger.debug(f"Task: {task}")
+    from rp.core.knowledge_context import inject_knowledge_context
+    inject_knowledge_context(assistant, task)
     assistant.messages.append({"role": "user", "content": f"{task}"})
     try:
         while True:
             assistant.autonomous_iterations += 1
             logger.debug(f"--- Autonomous iteration {assistant.autonomous_iterations} ---")
             logger.debug(f"Messages before context management: {len(assistant.messages)}")
-            from pr.core.context import manage_context_window
+            from rp.core.context import manage_context_window
             assistant.messages = manage_context_window(assistant.messages, assistant.verbose)
             logger.debug(f"Messages after context management: {len(assistant.messages)}")
-            from pr.core.api import call_api
-            from pr.tools.base import get_tools_definition
             response = call_api(
                 assistant.messages,
                 assistant.model,
@@ -43,31 +37,23 @@ def run_autonomous_mode(assistant, task):
                 get_tools_definition(),
                 verbose=assistant.verbose,
             )
             if "error" in response:
                 logger.error(f"API error in autonomous mode: {response['error']}")
                 print(f"{Colors.RED}Error: {response['error']}{Colors.RESET}")
                 break
             is_complete = is_task_complete(response, assistant.autonomous_iterations)
             logger.debug(f"Task completion check: {is_complete}")
             if is_complete:
                 result = process_response_autonomous(assistant, response)
                 print(f"\n{Colors.GREEN}r:{Colors.RESET} {result}\n")
                 logger.debug(f"=== AUTONOMOUS MODE COMPLETE ===")
                 logger.debug(f"Total iterations: {assistant.autonomous_iterations}")
                 logger.debug(f"Final message count: {len(assistant.messages)}")
                 break
             result = process_response_autonomous(assistant, response)
             if result:
                 print(f"\n{Colors.GREEN}r:{Colors.RESET} {result}\n")
             time.sleep(0.5)
     except KeyboardInterrupt:
         logger.debug("Autonomous mode interrupted by user")
         print(f"\n{Colors.YELLOW}Autonomous mode interrupted by user{Colors.RESET}")
@@ -79,39 +65,29 @@ def run_autonomous_mode(assistant, task):
 def process_response_autonomous(assistant, response):
     if "error" in response:
         return f"Error: {response['error']}"
     if "choices" not in response or not response["choices"]:
         return "No response from API"
     message = response["choices"][0]["message"]
     assistant.messages.append(message)
     if "tool_calls" in message and message["tool_calls"]:
         tool_results = []
         for tool_call in message["tool_calls"]:
             func_name = tool_call["function"]["name"]
             arguments = json.loads(tool_call["function"]["arguments"])
             result = execute_single_tool(assistant, func_name, arguments)
-            result = truncate_tool_result(result)
+            if isinstance(result, str):
+                try:
+                    result = json.loads(result)
+                except json.JSONDecodeError as ex:
+                    result = {"error": str(ex)}
             status = "success" if result.get("status") == "success" else "error"
+            result = truncate_tool_result(result)
             display_tool_call(func_name, arguments, status, result)
             tool_results.append(
-                {
-                    "tool_call_id": tool_call["id"],
-                    "role": "tool",
-                    "content": json.dumps(result),
-                }
+                {"tool_call_id": tool_call["id"], "role": "tool", "content": json.dumps(result)}
             )
         for result in tool_results:
             assistant.messages.append(result)
-        from pr.core.api import call_api
-        from pr.tools.base import get_tools_definition
         follow_up = call_api(
             assistant.messages,
             assistant.model,
@@ -122,9 +98,8 @@ def process_response_autonomous(assistant, response):
             verbose=assistant.verbose,
         )
         return process_response_autonomous(assistant, follow_up)
     content = message.get("content", "")
-    from pr.ui import render_markdown
+    from rp.ui import render_markdown
     return render_markdown(content, assistant.syntax_highlighting)
@@ -132,8 +107,7 @@ def process_response_autonomous(assistant, response):
 def execute_single_tool(assistant, func_name, arguments):
     logger.debug(f"Executing tool in autonomous mode: {func_name}")
     logger.debug(f"Tool arguments: {arguments}")
-    from pr.tools import (
+    from rp.tools import (
         apply_patch,
         chdir,
         close_editor,
@@ -141,9 +115,6 @@ def execute_single_tool(assistant, func_name, arguments):
         db_get,
         db_query,
         db_set,
-        editor_insert_text,
-        editor_replace_text,
-        editor_search,
         getpwd,
         http_fetch,
         index_source_directory,
@@ -161,12 +132,8 @@ def execute_single_tool(assistant, func_name, arguments):
         web_search_news,
         write_file,
     )
-    from pr.tools.filesystem import (
-        clear_edit_tracker,
-        display_edit_summary,
-        display_edit_timeline,
-    )
-    from pr.tools.patch import display_file_diff
+    from rp.tools.filesystem import clear_edit_tracker, display_edit_summary, display_edit_timeline
+    from rp.tools.patch import display_file_diff
     func_map = {
         "http_fetch": lambda **kw: http_fetch(**kw),
@@ -200,7 +167,6 @@ def execute_single_tool(assistant, func_name, arguments):
         "display_edit_timeline": lambda **kw: display_edit_timeline(**kw),
         "clear_edit_tracker": lambda **kw: clear_edit_tracker(),
     }
     if func_name in func_map:
         try:
             result = func_map[func_name](**arguments)
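
The reworked tool-result handling accepts both dict and JSON-string returns before reading the status field. A condensed, standalone sketch of the normalization step it now performs:

import json

def normalize_tool_result(result):
    # Mirrors the new branch in process_response_autonomous: string results are
    # parsed as JSON, and a parse failure is itself reported as an error result.
    if isinstance(result, str):
        try:
            result = json.loads(result)
        except json.JSONDecodeError as ex:
            result = {"error": str(ex)}
    status = "success" if result.get("status") == "success" else "error"
    return result, status

print(normalize_tool_result('{"status": "success", "output": "done"}'))
print(normalize_tool_result("not json"))  # -> ({'error': ...}, 'error')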


@@ -6,6 +6,7 @@ from typing import Any, Dict, Optional
 class APICache:
     def __init__(self, db_path: str, ttl_seconds: int = 3600):
         self.db_path = db_path
         self.ttl_seconds = ttl_seconds
@@ -15,23 +16,16 @@ class APICache:
         conn = sqlite3.connect(self.db_path, check_same_thread=False)
         cursor = conn.cursor()
         cursor.execute(
-            """
-            CREATE TABLE IF NOT EXISTS api_cache (
-                cache_key TEXT PRIMARY KEY,
-                response_data TEXT NOT NULL,
-                created_at INTEGER NOT NULL,
-                expires_at INTEGER NOT NULL,
-                model TEXT,
-                token_count INTEGER
-            )
-            """
+            "\n CREATE TABLE IF NOT EXISTS api_cache (\n cache_key TEXT PRIMARY KEY,\n response_data TEXT NOT NULL,\n created_at INTEGER NOT NULL,\n expires_at INTEGER NOT NULL,\n model TEXT,\n token_count INTEGER,\n hit_count INTEGER DEFAULT 0\n )\n "
         )
         cursor.execute(
-            """
-            CREATE INDEX IF NOT EXISTS idx_expires_at ON api_cache(expires_at)
-            """
+            "\n CREATE INDEX IF NOT EXISTS idx_expires_at ON api_cache(expires_at)\n "
         )
+        cursor.execute("PRAGMA table_info(api_cache)")
+        columns = [row[1] for row in cursor.fetchall()]
+        if "hit_count" not in columns:
+            cursor.execute("ALTER TABLE api_cache ADD COLUMN hit_count INTEGER DEFAULT 0")
         conn.commit()
         conn.close()
     def _generate_cache_key(
@@ -50,24 +44,23 @@ class APICache:
         self, model: str, messages: list, temperature: float, max_tokens: int
     ) -> Optional[Dict[str, Any]]:
         cache_key = self._generate_cache_key(model, messages, temperature, max_tokens)
         conn = sqlite3.connect(self.db_path, check_same_thread=False)
         cursor = conn.cursor()
         current_time = int(time.time())
         cursor.execute(
-            """
-            SELECT response_data FROM api_cache
-            WHERE cache_key = ? AND expires_at > ?
-            """,
+            "\n SELECT response_data FROM api_cache\n WHERE cache_key = ? AND expires_at > ?\n ",
             (cache_key, current_time),
         )
         row = cursor.fetchone()
-        conn.close()
         if row:
+            cursor.execute(
+                "\n UPDATE api_cache SET hit_count = hit_count + 1\n WHERE cache_key = ?\n ",
+                (cache_key,),
+            )
+            conn.commit()
+            conn.close()
             return json.loads(row[0])
+        conn.close()
         return None
     def set(
@@ -80,80 +73,55 @@ class APICache:
         token_count: int = 0,
     ):
         cache_key = self._generate_cache_key(model, messages, temperature, max_tokens)
         current_time = int(time.time())
         expires_at = current_time + self.ttl_seconds
         conn = sqlite3.connect(self.db_path, check_same_thread=False)
         cursor = conn.cursor()
         cursor.execute(
-            """
-            INSERT OR REPLACE INTO api_cache
-            (cache_key, response_data, created_at, expires_at, model, token_count)
-            VALUES (?, ?, ?, ?, ?, ?)
-            """,
-            (
-                cache_key,
-                json.dumps(response),
-                current_time,
-                expires_at,
-                model,
-                token_count,
-            ),
+            "\n INSERT OR REPLACE INTO api_cache\n (cache_key, response_data, created_at, expires_at, model, token_count, hit_count)\n VALUES (?, ?, ?, ?, ?, ?, 0)\n ",
+            (cache_key, json.dumps(response), current_time, expires_at, model, token_count),
         )
         conn.commit()
         conn.close()
     def clear_expired(self):
         current_time = int(time.time())
         conn = sqlite3.connect(self.db_path, check_same_thread=False)
         cursor = conn.cursor()
         cursor.execute("DELETE FROM api_cache WHERE expires_at <= ?", (current_time,))
         deleted_count = cursor.rowcount
         conn.commit()
         conn.close()
         return deleted_count
     def clear_all(self):
         conn = sqlite3.connect(self.db_path, check_same_thread=False)
         cursor = conn.cursor()
         cursor.execute("DELETE FROM api_cache")
         deleted_count = cursor.rowcount
         conn.commit()
         conn.close()
         return deleted_count
     def get_statistics(self) -> Dict[str, Any]:
         conn = sqlite3.connect(self.db_path, check_same_thread=False)
         cursor = conn.cursor()
         cursor.execute("SELECT COUNT(*) FROM api_cache")
         total_entries = cursor.fetchone()[0]
         current_time = int(time.time())
         cursor.execute("SELECT COUNT(*) FROM api_cache WHERE expires_at > ?", (current_time,))
         valid_entries = cursor.fetchone()[0]
         cursor.execute(
-            "SELECT SUM(token_count) FROM api_cache WHERE expires_at > ?",
-            (current_time,),
+            "SELECT SUM(token_count) FROM api_cache WHERE expires_at > ?", (current_time,)
         )
         total_tokens = cursor.fetchone()[0] or 0
+        cursor.execute("SELECT SUM(hit_count) FROM api_cache WHERE expires_at > ?", (current_time,))
+        total_hits = cursor.fetchone()[0] or 0
         conn.close()
         return {
             "total_entries": total_entries,
             "valid_entries": valid_entries,
             "expired_entries": total_entries - valid_entries,
             "total_cached_tokens": total_tokens,
+            "total_cache_hits": total_hits,
         }
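
With hit_count in place, repeated lookups are now observable through get_statistics. A usage sketch; the import path and database location are illustrative assumptions, and keyword arguments are used so the call is robust to parameter ordering:

from rp.cache.api_cache import APICache  # import path assumed

cache = APICache("/tmp/rp_api_cache.db", ttl_seconds=3600)  # path illustrative
messages = [{"role": "user", "content": "hello"}]
cache.set(model="gpt-4", messages=messages, response={"choices": []},
          temperature=0.7, max_tokens=256)
cache.get("gpt-4", messages, 0.7, 256)                      # increments hit_count
print(cache.get_statistics()["total_cache_hits"])           # new aggregate field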


@@ -13,6 +13,13 @@ class ToolCache:
         "db_get",
         "db_query",
         "index_directory",
+        "http_fetch",
+        "web_search",
+        "web_search_news",
+        "search_knowledge",
+        "get_knowledge_entry",
+        "get_knowledge_by_category",
+        "get_knowledge_statistics",
     }
     def __init__(self, db_path: str, ttl_seconds: int = 300):
@@ -24,26 +31,13 @@ class ToolCache:
         conn = sqlite3.connect(self.db_path, check_same_thread=False)
         cursor = conn.cursor()
         cursor.execute(
-            """
-            CREATE TABLE IF NOT EXISTS tool_cache (
-                cache_key TEXT PRIMARY KEY,
-                tool_name TEXT NOT NULL,
-                result_data TEXT NOT NULL,
-                created_at INTEGER NOT NULL,
-                expires_at INTEGER NOT NULL,
-                hit_count INTEGER DEFAULT 0
-            )
-            """
+            "\n CREATE TABLE IF NOT EXISTS tool_cache (\n cache_key TEXT PRIMARY KEY,\n tool_name TEXT NOT NULL,\n result_data TEXT NOT NULL,\n created_at INTEGER NOT NULL,\n expires_at INTEGER NOT NULL,\n hit_count INTEGER DEFAULT 0\n )\n "
         )
         cursor.execute(
-            """
-            CREATE INDEX IF NOT EXISTS idx_tool_expires ON tool_cache(expires_at)
-            """
+            "\n CREATE INDEX IF NOT EXISTS idx_tool_expires ON tool_cache(expires_at)\n "
         )
         cursor.execute(
-            """
-            CREATE INDEX IF NOT EXISTS idx_tool_name ON tool_cache(tool_name)
-            """
+            "\n CREATE INDEX IF NOT EXISTS idx_tool_name ON tool_cache(tool_name)\n "
         )
         conn.commit()
         conn.close()
@@ -59,133 +53,80 @@ class ToolCache:
     def get(self, tool_name: str, arguments: dict) -> Optional[Any]:
         if not self.is_cacheable(tool_name):
             return None
         cache_key = self._generate_cache_key(tool_name, arguments)
         conn = sqlite3.connect(self.db_path, check_same_thread=False)
         cursor = conn.cursor()
         current_time = int(time.time())
         cursor.execute(
-            """
-            SELECT result_data, hit_count FROM tool_cache
-            WHERE cache_key = ? AND expires_at > ?
-            """,
+            "\n SELECT result_data, hit_count FROM tool_cache\n WHERE cache_key = ? AND expires_at > ?\n ",
             (cache_key, current_time),
         )
         row = cursor.fetchone()
         if row:
             cursor.execute(
-                """
-                UPDATE tool_cache SET hit_count = hit_count + 1
-                WHERE cache_key = ?
-                """,
+                "\n UPDATE tool_cache SET hit_count = hit_count + 1\n WHERE cache_key = ?\n ",
                 (cache_key,),
             )
             conn.commit()
             conn.close()
             return json.loads(row[0])
         conn.close()
         return None
     def set(self, tool_name: str, arguments: dict, result: Any):
         if not self.is_cacheable(tool_name):
             return
         cache_key = self._generate_cache_key(tool_name, arguments)
         current_time = int(time.time())
         expires_at = current_time + self.ttl_seconds
         conn = sqlite3.connect(self.db_path, check_same_thread=False)
         cursor = conn.cursor()
         cursor.execute(
-            """
-            INSERT OR REPLACE INTO tool_cache
-            (cache_key, tool_name, result_data, created_at, expires_at, hit_count)
-            VALUES (?, ?, ?, ?, ?, 0)
-            """,
+            "\n INSERT OR REPLACE INTO tool_cache\n (cache_key, tool_name, result_data, created_at, expires_at, hit_count)\n VALUES (?, ?, ?, ?, ?, 0)\n ",
             (cache_key, tool_name, json.dumps(result), current_time, expires_at),
         )
         conn.commit()
         conn.close()
-    def invalidate_tool(self, tool_name: str):
-        conn = sqlite3.connect(self.db_path, check_same_thread=False)
-        cursor = conn.cursor()
-        cursor.execute("DELETE FROM tool_cache WHERE tool_name = ?", (tool_name,))
-        deleted_count = cursor.rowcount
-        conn.commit()
-        conn.close()
-        return deleted_count
     def clear_expired(self):
         current_time = int(time.time())
         conn = sqlite3.connect(self.db_path, check_same_thread=False)
         cursor = conn.cursor()
         cursor.execute("DELETE FROM tool_cache WHERE expires_at <= ?", (current_time,))
         deleted_count = cursor.rowcount
         conn.commit()
         conn.close()
         return deleted_count
     def clear_all(self):
         conn = sqlite3.connect(self.db_path, check_same_thread=False)
         cursor = conn.cursor()
         cursor.execute("DELETE FROM tool_cache")
         deleted_count = cursor.rowcount
         conn.commit()
         conn.close()
         return deleted_count
     def get_statistics(self) -> dict:
         conn = sqlite3.connect(self.db_path, check_same_thread=False)
         cursor = conn.cursor()
         cursor.execute("SELECT COUNT(*) FROM tool_cache")
         total_entries = cursor.fetchone()[0]
         current_time = int(time.time())
         cursor.execute("SELECT COUNT(*) FROM tool_cache WHERE expires_at > ?", (current_time,))
         valid_entries = cursor.fetchone()[0]
         cursor.execute(
-            "SELECT SUM(hit_count) FROM tool_cache WHERE expires_at > ?",
-            (current_time,),
+            "SELECT SUM(hit_count) FROM tool_cache WHERE expires_at > ?", (current_time,)
        )
         total_hits = cursor.fetchone()[0] or 0
         cursor.execute(
-            """
-            SELECT tool_name, COUNT(*), SUM(hit_count)
-            FROM tool_cache
-            WHERE expires_at > ?
-            GROUP BY tool_name
-            """,
+            "\n SELECT tool_name, COUNT(*), SUM(hit_count)\n FROM tool_cache\n WHERE expires_at > ?\n GROUP BY tool_name\n ",
             (current_time,),
         )
         tool_stats = {}
         for row in cursor.fetchall():
             tool_stats[row[0]] = {"cached_entries": row[1], "total_hits": row[2] or 0}
         conn.close()
         return {
             "total_entries": total_entries,
             "valid_entries": valid_entries,

rp/commands/__init__.py (new file, 3 lines)

@@ -0,0 +1,3 @@
from rp.commands.handlers import handle_command
__all__ = ["handle_command"]


@@ -1,87 +1,98 @@
 import json
 import time
+from rp.commands.multiplexer_commands import MULTIPLEXER_COMMANDS
-from pr.autonomous import run_autonomous_mode
-from pr.core.api import list_models
-from pr.tools import read_file
-from pr.tools.base import get_tools_definition
-from pr.ui import Colors
+from rp.autonomous import run_autonomous_mode
+from rp.core.api import list_models
+from rp.tools import read_file
+from rp.tools.base import get_tools_definition
+from rp.ui import Colors
+from rp.editor import RPEditor
 def handle_command(assistant, command):
     command_parts = command.strip().split(maxsplit=1)
     cmd = command_parts[0].lower()
+    if cmd in MULTIPLEXER_COMMANDS:
+        return MULTIPLEXER_COMMANDS[cmd](
+            assistant, command_parts[1:] if len(command_parts) > 1 else []
+        )
+    if cmd == "/edit":
+        rp_editor = RPEditor(command_parts[1] if len(command_parts) > 1 else None)
+        rp_editor.start()
+        rp_editor.thread.join()
+        task = str(rp_editor.get_text())
+        rp_editor.stop()
+        rp_editor = None
+        if task:
+            run_autonomous_mode(assistant, task)
+    elif cmd == "/prompt":
+        rp_editor = RPEditor(command_parts[1] if len(command_parts) > 1 else None)
+        rp_editor.start()
+        rp_editor.thread.join()
+        prompt_text = str(rp_editor.get_text())
+        rp_editor.stop()
+        rp_editor = None
+        if prompt_text.strip():
+            from rp.core.assistant import process_message
+            process_message(assistant, prompt_text)
-    if cmd == "/auto":
+    elif cmd == "/auto":
         if len(command_parts) < 2:
             print(f"{Colors.RED}Usage: /auto [task description]{Colors.RESET}")
             print(
                 f"{Colors.GRAY}Example: /auto Create a Python web scraper for news sites{Colors.RESET}"
             )
             return True
         task = command_parts[1]
         run_autonomous_mode(assistant, task)
         return True
     if cmd in ["exit", "quit", "q"]:
         return False
-    elif cmd == "help":
-        print(
-            f"""
-{Colors.BOLD}Available Commands:{Colors.RESET}
-{Colors.BOLD}Basic:{Colors.RESET}
-  exit, quit, q - Exit the assistant
-  /help - Show this help message
-  /reset - Clear message history
-  /dump - Show message history as JSON
-  /verbose - Toggle verbose mode
-  /models - List available models
-  /tools - List available tools
-{Colors.BOLD}File Operations:{Colors.RESET}
-  /review <file> - Review a file
-  /refactor <file> - Refactor code in a file
-  /obfuscate <file> - Obfuscate code in a file
-{Colors.BOLD}Advanced Features:{Colors.RESET}
-  {Colors.CYAN}/auto <task>{Colors.RESET} - Enter autonomous mode
-  {Colors.CYAN}/workflow <name>{Colors.RESET} - Execute a workflow
-  {Colors.CYAN}/workflows{Colors.RESET} - List all workflows
-  {Colors.CYAN}/agent <role> <task>{Colors.RESET} - Create specialized agent and assign task
-  {Colors.CYAN}/agents{Colors.RESET} - Show active agents
-  {Colors.CYAN}/collaborate <task>{Colors.RESET} - Use multiple agents to collaborate
-  {Colors.CYAN}/knowledge <query>{Colors.RESET} - Search knowledge base
-  {Colors.CYAN}/remember <content>{Colors.RESET} - Store information in knowledge base
-  {Colors.CYAN}/history{Colors.RESET} - Show conversation history
-  {Colors.CYAN}/cache{Colors.RESET} - Show cache statistics
-  {Colors.CYAN}/cache clear{Colors.RESET} - Clear all caches
-  {Colors.CYAN}/stats{Colors.RESET} - Show system statistics
-"""
-        )
+    elif cmd == "/help" or cmd == "help":
+        from rp.commands.help_docs import (
+            get_agent_help,
+            get_background_help,
+            get_cache_help,
+            get_full_help,
+            get_knowledge_help,
+            get_workflow_help,
+        )
+        if len(command_parts) > 1:
+            topic = command_parts[1].lower()
+            if topic == "workflows":
+                print(get_workflow_help())
+            elif topic == "agents":
+                print(get_agent_help())
+            elif topic == "knowledge":
+                print(get_knowledge_help())
+            elif topic == "cache":
+                print(get_cache_help())
+            elif topic == "background":
+                print(get_background_help())
+            else:
+                print(f"{Colors.RED}Unknown help topic: {topic}{Colors.RESET}")
+                print(
+                    f"{Colors.GRAY}Available topics: workflows, agents, knowledge, cache, background{Colors.RESET}"
+                )
+        else:
+            print(get_full_help())
     elif cmd == "/reset":
         assistant.messages = assistant.messages[:1]
         print(f"{Colors.GREEN}Message history cleared{Colors.RESET}")
     elif cmd == "/dump":
         print(json.dumps(assistant.messages, indent=2))
     elif cmd == "/verbose":
         assistant.verbose = not assistant.verbose
         print(
-            f"Verbose mode: {Colors.GREEN if assistant.verbose else Colors.RED}{'ON' if assistant.verbose else 'OFF'}{Colors.RESET}"
+            f"Verbose mode: {(Colors.GREEN if assistant.verbose else Colors.RED)}{('ON' if assistant.verbose else 'OFF')}{Colors.RESET}"
         )
-    elif cmd == "/model":
+    elif cmd.startswith("/model"):
         if len(command_parts) < 2:
             print("Current model: " + Colors.GREEN + assistant.model + Colors.RESET)
         else:
             assistant.model = command_parts[1]
             print(f"Model set to: {Colors.GREEN}{assistant.model}{Colors.RESET}")
     elif cmd == "/models":
         models = list_models(assistant.model_list_url, assistant.api_key)
         if isinstance(models, dict) and "error" in models:
@@ -90,76 +101,65 @@ def handle_command(assistant, command):
             print(f"{Colors.BOLD}Available Models:{Colors.RESET}")
             for model in models:
                 print(f"{Colors.CYAN}{model['id']}{Colors.RESET}")
     elif cmd == "/tools":
         print(f"{Colors.BOLD}Available Tools:{Colors.RESET}")
         for tool in get_tools_definition():
             func = tool["function"]
             print(f"{Colors.CYAN}{func['name']}{Colors.RESET}: {func['description']}")
     elif cmd == "/review" and len(command_parts) > 1:
         filename = command_parts[1]
         review_file(assistant, filename)
     elif cmd == "/refactor" and len(command_parts) > 1:
         filename = command_parts[1]
         refactor_file(assistant, filename)
     elif cmd == "/obfuscate" and len(command_parts) > 1:
         filename = command_parts[1]
         obfuscate_file(assistant, filename)
     elif cmd == "/workflows":
         show_workflows(assistant)
     elif cmd == "/workflow" and len(command_parts) > 1:
         workflow_name = command_parts[1]
         execute_workflow_command(assistant, workflow_name)
-    elif cmd == "/agent" and len(command_parts) > 1:
+    elif cmd == "/agent":
+        if len(command_parts) < 2:
+            print(f"{Colors.RED}Usage: /agent <role> <task>{Colors.RESET}")
+            print(
+                f"{Colors.GRAY}Available roles: coding, research, data_analysis, planning, testing, documentation{Colors.RESET}"
+            )
+            return True
         args = command_parts[1].split(maxsplit=1)
         if len(args) < 2:
             print(f"{Colors.RED}Usage: /agent <role> <task>{Colors.RESET}")
             print(
                 f"{Colors.GRAY}Available roles: coding, research, data_analysis, planning, testing, documentation{Colors.RESET}"
             )
-        else:
-            role, task = args[0], args[1]
-            execute_agent_task(assistant, role, task)
+            return True
+        role, task = (args[0], args[1])
+        execute_agent_task(assistant, role, task)
     elif cmd == "/agents":
         show_agents(assistant)
     elif cmd == "/collaborate" and len(command_parts) > 1:
         task = command_parts[1]
         collaborate_agents_command(assistant, task)
     elif cmd == "/knowledge" and len(command_parts) > 1:
         query = command_parts[1]
         search_knowledge(assistant, query)
     elif cmd == "/remember" and len(command_parts) > 1:
         content = command_parts[1]
         store_knowledge(assistant, content)
     elif cmd == "/history":
         show_conversation_history(assistant)
     elif cmd == "/cache":
         if len(command_parts) > 1 and command_parts[1].lower() == "clear":
             clear_caches(assistant)
         else:
             show_cache_stats(assistant)
     elif cmd == "/stats":
         show_system_stats(assistant)
     elif cmd.startswith("/bg"):
         handle_background_command(assistant, command)
     else:
         return None
     return True
@@ -167,7 +167,7 @@ def review_file(assistant, filename):
     result = read_file(filename)
     if result["status"] == "success":
         message = f"Please review this file and provide feedback:\n\n{result['content']}"
-        from pr.core.assistant import process_message
+        from rp.core.assistant import process_message
         process_message(assistant, message)
     else:
@@ -178,7 +178,7 @@ def refactor_file(assistant, filename):
     result = read_file(filename)
     if result["status"] == "success":
         message = f"Please refactor this code to improve its quality:\n\n{result['content']}"
-        from pr.core.assistant import process_message
+        from rp.core.assistant import process_message
         process_message(assistant, message)
     else:
@@ -189,7 +189,7 @@ def obfuscate_file(assistant, filename):
     result = read_file(filename)
     if result["status"] == "success":
         message = f"Please obfuscate this code:\n\n{result['content']}"
-        from pr.core.assistant import process_message
+        from rp.core.assistant import process_message
         process_message(assistant, message)
     else:
@@ -200,12 +200,10 @@ def show_workflows(assistant):
     if not hasattr(assistant, "enhanced"):
         print(f"{Colors.YELLOW}Enhanced features not initialized{Colors.RESET}")
         return
     workflows = assistant.enhanced.get_workflow_list()
     if not workflows:
         print(f"{Colors.YELLOW}No workflows found{Colors.RESET}")
         return
     print(f"\n{Colors.BOLD}Available Workflows:{Colors.RESET}")
     for wf in workflows:
         print(f"{Colors.CYAN}{wf['name']}{Colors.RESET}: {wf['description']}")
@@ -216,10 +214,8 @@ def execute_workflow_command(assistant, workflow_name):
     if not hasattr(assistant, "enhanced"):
         print(f"{Colors.YELLOW}Enhanced features not initialized{Colors.RESET}")
         return
     print(f"{Colors.YELLOW}Executing workflow: {workflow_name}...{Colors.RESET}")
     result = assistant.enhanced.execute_workflow(workflow_name)
     if "error" in result:
         print(f"{Colors.RED}Error: {result['error']}{Colors.RESET}")
     else:
@@ -232,14 +228,11 @@ def execute_agent_task(assistant, role, task):
     if not hasattr(assistant, "enhanced"):
         print(f"{Colors.YELLOW}Enhanced features not initialized{Colors.RESET}")
         return
     print(f"{Colors.YELLOW}Creating {role} agent...{Colors.RESET}")
     agent_id = assistant.enhanced.create_agent(role)
     print(f"{Colors.GREEN}Agent created: {agent_id}{Colors.RESET}")
     print(f"{Colors.YELLOW}Executing task...{Colors.RESET}")
     result = assistant.enhanced.agent_task(agent_id, task)
     if "error" in result:
         print(f"{Colors.RED}Error: {result['error']}{Colors.RESET}")
     else:
@@ -251,11 +244,9 @@ def show_agents(assistant):
     if not hasattr(assistant, "enhanced"):
         print(f"{Colors.YELLOW}Enhanced features not initialized{Colors.RESET}")
         return
     summary = assistant.enhanced.get_agent_summary()
     print(f"\n{Colors.BOLD}Agent Session Summary:{Colors.RESET}")
     print(f"Active agents: {summary['active_agents']}")
     if summary["agents"]:
         for agent in summary["agents"]:
             print(f"\n{Colors.CYAN}{agent['agent_id']}{Colors.RESET}")
@@ -268,17 +259,13 @@ def collaborate_agents_command(assistant, task):
     if not hasattr(assistant, "enhanced"):
         print(f"{Colors.YELLOW}Enhanced features not initialized{Colors.RESET}")
         return
     print(f"{Colors.YELLOW}Initiating agent collaboration...{Colors.RESET}")
     roles = ["coding", "research", "planning"]
     result = assistant.enhanced.collaborate_agents(task, roles)
     print(f"\n{Colors.GREEN}Collaboration completed{Colors.RESET}")
     print(f"\nOrchestrator response:")
     if "orchestrator" in result and "response" in result["orchestrator"]:
         print(result["orchestrator"]["response"])
     if result.get("agents"):
         print(f"\n{Colors.BOLD}Agent Results:{Colors.RESET}")
         for agent_result in result["agents"]:
@@ -291,13 +278,10 @@ def search_knowledge(assistant, query):
     if not hasattr(assistant, "enhanced"):
         print(f"{Colors.YELLOW}Enhanced features not initialized{Colors.RESET}")
         return
     results = assistant.enhanced.search_knowledge(query)
     if not results:
         print(f"{Colors.YELLOW}No knowledge entries found for: {query}{Colors.RESET}")
         return
     print(f"\n{Colors.BOLD}Knowledge Search Results:{Colors.RESET}")
     for entry in results:
         print(f"\n{Colors.CYAN}[{entry.category}]{Colors.RESET}")
@@ -309,15 +293,12 @@ def store_knowledge(assistant, content):
     if not hasattr(assistant, "enhanced"):
         print(f"{Colors.YELLOW}Enhanced features not initialized{Colors.RESET}")
         return
     import time
     import uuid
-    from pr.memory import KnowledgeEntry
+    from rp.memory import KnowledgeEntry
     categories = assistant.enhanced.fact_extractor.categorize_content(content)
     entry_id = str(uuid.uuid4())[:16]
     entry = KnowledgeEntry(
         entry_id=entry_id,
         category=categories[0] if categories else "general",
@@ -326,7 +307,6 @@ def store_knowledge(assistant, content):
         created_at=time.time(),
         updated_at=time.time(),
     )
     assistant.enhanced.knowledge_store.add_entry(entry)
     print(f"{Colors.GREEN}Knowledge stored successfully{Colors.RESET}")
     print(f"Entry ID: {entry_id}")
@@ -337,13 +317,10 @@ def show_conversation_history(assistant):
     if not hasattr(assistant, "enhanced"):
         print(f"{Colors.YELLOW}Enhanced features not initialized{Colors.RESET}")
         return
     history = assistant.enhanced.get_conversation_history(limit=10)
     if not history:
         print(f"{Colors.YELLOW}No conversation history found{Colors.RESET}")
         return
     print(f"\n{Colors.BOLD}Recent Conversations:{Colors.RESET}")
     for conv in history:
         import datetime
@@ -362,11 +339,8 @@ def show_cache_stats(assistant):
     if not hasattr(assistant, "enhanced"):
         print(f"{Colors.YELLOW}Enhanced features not initialized{Colors.RESET}")
         return
     stats = assistant.enhanced.get_cache_statistics()
     print(f"\n{Colors.BOLD}Cache Statistics:{Colors.RESET}")
print(f"\n{Colors.BOLD}Cache Statistics:{Colors.RESET}") print(f"\n{Colors.BOLD}Cache Statistics:{Colors.RESET}")
if "api_cache" in stats: if "api_cache" in stats:
api_stats = stats["api_cache"] api_stats = stats["api_cache"]
print(f"\n{Colors.CYAN}API Cache:{Colors.RESET}") print(f"\n{Colors.CYAN}API Cache:{Colors.RESET}")
@ -374,14 +348,13 @@ def show_cache_stats(assistant):
print(f" Valid entries: {api_stats['valid_entries']}") print(f" Valid entries: {api_stats['valid_entries']}")
print(f" Expired entries: {api_stats['expired_entries']}") print(f" Expired entries: {api_stats['expired_entries']}")
print(f" Cached tokens: {api_stats['total_cached_tokens']}") print(f" Cached tokens: {api_stats['total_cached_tokens']}")
print(f" Total cache hits: {api_stats['total_cache_hits']}")
if "tool_cache" in stats: if "tool_cache" in stats:
tool_stats = stats["tool_cache"] tool_stats = stats["tool_cache"]
print(f"\n{Colors.CYAN}Tool Cache:{Colors.RESET}") print(f"\n{Colors.CYAN}Tool Cache:{Colors.RESET}")
print(f" Total entries: {tool_stats['total_entries']}") print(f" Total entries: {tool_stats['total_entries']}")
print(f" Valid entries: {tool_stats['valid_entries']}") print(f" Valid entries: {tool_stats['valid_entries']}")
print(f" Total cache hits: {tool_stats['total_cache_hits']}") print(f" Total cache hits: {tool_stats['total_cache_hits']}")
if tool_stats.get("by_tool"): if tool_stats.get("by_tool"):
print(f"\n Per-tool statistics:") print(f"\n Per-tool statistics:")
for tool_name, tool_stat in tool_stats["by_tool"].items(): for tool_name, tool_stat in tool_stats["by_tool"].items():
@ -394,7 +367,6 @@ def clear_caches(assistant):
if not hasattr(assistant, "enhanced"): if not hasattr(assistant, "enhanced"):
print(f"{Colors.YELLOW}Enhanced features not initialized{Colors.RESET}") print(f"{Colors.YELLOW}Enhanced features not initialized{Colors.RESET}")
return return
assistant.enhanced.clear_caches() assistant.enhanced.clear_caches()
print(f"{Colors.GREEN}All caches cleared successfully{Colors.RESET}") print(f"{Colors.GREEN}All caches cleared successfully{Colors.RESET}")
@ -403,22 +375,17 @@ def show_system_stats(assistant):
if not hasattr(assistant, "enhanced"): if not hasattr(assistant, "enhanced"):
print(f"{Colors.YELLOW}Enhanced features not initialized{Colors.RESET}") print(f"{Colors.YELLOW}Enhanced features not initialized{Colors.RESET}")
return return
print(f"\n{Colors.BOLD}System Statistics:{Colors.RESET}") print(f"\n{Colors.BOLD}System Statistics:{Colors.RESET}")
cache_stats = assistant.enhanced.get_cache_statistics() cache_stats = assistant.enhanced.get_cache_statistics()
knowledge_stats = assistant.enhanced.get_knowledge_statistics() knowledge_stats = assistant.enhanced.get_knowledge_statistics()
agent_summary = assistant.enhanced.get_agent_summary() agent_summary = assistant.enhanced.get_agent_summary()
print(f"\n{Colors.CYAN}Knowledge Base:{Colors.RESET}") print(f"\n{Colors.CYAN}Knowledge Base:{Colors.RESET}")
print(f" Total entries: {knowledge_stats['total_entries']}") print(f" Total entries: {knowledge_stats['total_entries']}")
print(f" Categories: {knowledge_stats['total_categories']}") print(f" Categories: {knowledge_stats['total_categories']}")
print(f" Total accesses: {knowledge_stats['total_accesses']}") print(f" Total accesses: {knowledge_stats['total_accesses']}")
print(f" Vocabulary size: {knowledge_stats['vocabulary_size']}") print(f" Vocabulary size: {knowledge_stats['vocabulary_size']}")
print(f"\n{Colors.CYAN}Active Agents:{Colors.RESET}") print(f"\n{Colors.CYAN}Active Agents:{Colors.RESET}")
print(f" Count: {agent_summary['active_agents']}") print(f" Count: {agent_summary['active_agents']}")
if "api_cache" in cache_stats: if "api_cache" in cache_stats:
print(f"\n{Colors.CYAN}Caching:{Colors.RESET}") print(f"\n{Colors.CYAN}Caching:{Colors.RESET}")
print(f" API cache entries: {cache_stats['api_cache']['valid_entries']}") print(f" API cache entries: {cache_stats['api_cache']['valid_entries']}")
@ -435,9 +402,7 @@ def handle_background_command(assistant, command):
f"{Colors.GRAY}Available subcommands: start, list, status, output, input, kill, events{Colors.RESET}" f"{Colors.GRAY}Available subcommands: start, list, status, output, input, kill, events{Colors.RESET}"
) )
return return
subcmd = parts[1].lower() subcmd = parts[1].lower()
try: try:
if subcmd == "start" and len(parts) >= 3: if subcmd == "start" and len(parts) >= 3:
session_name = f"bg_{len(parts[2].split())}_{int(time.time())}" session_name = f"bg_{len(parts[2].split())}_{int(time.time())}"
@ -466,10 +431,9 @@ def handle_background_command(assistant, command):
def start_background_session(assistant, session_name, command): def start_background_session(assistant, session_name, command):
"""Start a command in background.""" """Start a command in background."""
try: try:
from pr.multiplexer import start_background_process from rp.multiplexer import start_background_process
result = start_background_process(session_name, command) result = start_background_process(session_name, command)
if result["status"] == "success": if result["status"] == "success":
print( print(
f"{Colors.GREEN}Started background session '{session_name}' with PID {result['pid']}{Colors.RESET}" f"{Colors.GREEN}Started background session '{session_name}' with PID {result['pid']}{Colors.RESET}"
@ -485,8 +449,8 @@ def start_background_session(assistant, session_name, command):
def list_background_sessions(assistant): def list_background_sessions(assistant):
"""List all background sessions.""" """List all background sessions."""
try: try:
from pr.multiplexer import get_all_sessions from rp.multiplexer import get_all_sessions
from pr.ui.display import display_multiplexer_status from rp.ui.display import display_multiplexer_status
sessions = get_all_sessions() sessions = get_all_sessions()
display_multiplexer_status(sessions) display_multiplexer_status(sessions)
@ -497,7 +461,7 @@ def list_background_sessions(assistant):
def show_session_status(assistant, session_name): def show_session_status(assistant, session_name):
"""Show status of a specific session.""" """Show status of a specific session."""
try: try:
from pr.multiplexer import get_session_info from rp.multiplexer import get_session_info
info = get_session_info(session_name) info = get_session_info(session_name)
if info: if info:
@ -519,7 +483,7 @@ def show_session_status(assistant, session_name):
def show_session_output(assistant, session_name): def show_session_output(assistant, session_name):
"""Show output of a specific session.""" """Show output of a specific session."""
try: try:
from pr.multiplexer import get_session_output from rp.multiplexer import get_session_output
output = get_session_output(session_name, lines=50) output = get_session_output(session_name, lines=50)
if output: if output:
@ -536,7 +500,7 @@ def show_session_output(assistant, session_name):
def send_session_input(assistant, session_name, input_text): def send_session_input(assistant, session_name, input_text):
"""Send input to a background session.""" """Send input to a background session."""
try: try:
from pr.multiplexer import send_input_to_session from rp.multiplexer import send_input_to_session
result = send_input_to_session(session_name, input_text) result = send_input_to_session(session_name, input_text)
if result["status"] == "success": if result["status"] == "success":
@ -552,7 +516,7 @@ def send_session_input(assistant, session_name, input_text):
def kill_background_session(assistant, session_name): def kill_background_session(assistant, session_name):
"""Kill a background session.""" """Kill a background session."""
try: try:
from pr.multiplexer import kill_session from rp.multiplexer import kill_session
result = kill_session(session_name) result = kill_session(session_name)
if result["status"] == "success": if result["status"] == "success":
@ -568,17 +532,15 @@ def kill_background_session(assistant, session_name):
def show_background_events(assistant): def show_background_events(assistant):
"""Show recent background events.""" """Show recent background events."""
try: try:
from pr.core.background_monitor import get_global_monitor from rp.core.background_monitor import get_global_monitor
monitor = get_global_monitor() monitor = get_global_monitor()
events = monitor.get_pending_events() events = monitor.get_pending_events()
if events: if events:
print(f"{Colors.BOLD}Recent Background Events:{Colors.RESET}") print(f"{Colors.BOLD}Recent Background Events:{Colors.RESET}")
print(f"{Colors.GRAY}{'' * 60}{Colors.RESET}") print(f"{Colors.GRAY}{'' * 60}{Colors.RESET}")
for event in events[-10:]:
for event in events[-10:]: # Show last 10 events from rp.ui.display import display_background_event
from pr.ui.display import display_background_event
display_background_event(event) display_background_event(event)
else: else:
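Aside from the removed API-cache-hits line and a few dropped comments, nearly every change in this file is the mechanical `pr` to `rp` package rename. A sketch of a tree-wide rewrite in the same spirit (a hypothetical helper, not part of this changeset; it assumes the old package name only ever appears after `from` or `import` at a word boundary):

# Hypothetical rename helper (not in the diff): rewrites `pr` imports to `rp`.
import pathlib
import re

OLD, NEW = "pr", "rp"
# Matches "from pr." / "import pr" etc. at word boundaries only.
PATTERN = re.compile(rf"\b(from|import)(\s+){OLD}(\.|\b)")

def rename_package(root: str) -> None:
    for path in pathlib.Path(root).rglob("*.py"):
        text = path.read_text()
        new_text = PATTERN.sub(rf"\g<1>\g<2>{NEW}\g<3>", text)
        if new_text != text:
            path.write_text(new_text)

if __name__ == "__main__":
    rename_package(".")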

rp/commands/help_docs.py (new file, +25): diff suppressed because one or more lines are too long

@@ -1,41 +1,34 @@
-from pr.multiplexer import get_multiplexer
-from pr.tools.interactive_control import (
+from rp.multiplexer import get_multiplexer
+from rp.tools.interactive_control import (
    close_interactive_session,
    get_session_status,
    list_active_sessions,
    read_session_output,
    send_input_to_session,
)
-from pr.tools.prompt_detection import get_global_detector
-from pr.ui import Colors
+from rp.tools.prompt_detection import get_global_detector
+from rp.ui import Colors


def show_sessions(args=None):
    """Show all active multiplexer sessions."""
    sessions = list_active_sessions()
    if not sessions:
        print(f"{Colors.YELLOW}No active sessions.{Colors.RESET}")
        return
    print(f"{Colors.BOLD}Active Sessions:{Colors.RESET}")
    print("-" * 80)
    for session_name, session_data in sessions.items():
        metadata = session_data["metadata"]
        output_summary = session_data["output_summary"]
        status = get_session_status(session_name)
        is_active = status.get("is_active", False) if status else False
        status_color = Colors.GREEN if is_active else Colors.RED
        print(
            f"{Colors.CYAN}{session_name}{Colors.RESET}: {status_color}{metadata.get('process_type', 'unknown')}{Colors.RESET}"
        )
        if status and "pid" in status:
            print(f" PID: {status['pid']}")
        print(f" Age: {metadata.get('start_time', 0):.1f}s")
        print(
            f" Output: {output_summary['stdout_lines']} stdout, {output_summary['stderr_lines']} stderr lines"
@@ -50,19 +43,14 @@ def attach_session(args):
    if not args or len(args) < 1:
        print(f"{Colors.RED}Usage: attach_session <session_name>{Colors.RESET}")
        return
    session_name = args[0]
    status = get_session_status(session_name)
    if not status:
        print(f"{Colors.RED}Session '{session_name}' not found.{Colors.RESET}")
        return
    print(f"{Colors.BOLD}Attaching to session: {session_name}{Colors.RESET}")
    print(f"Process type: {status.get('metadata', {}).get('process_type', 'unknown')}")
    print("-" * 50)
-    # Show recent output
    try:
        output = read_session_output(session_name, lines=20)
        if output["stdout"]:
@@ -77,9 +65,8 @@ def attach_session(args):
            print(f" {line}")
    except Exception as e:
        print(f"{Colors.RED}Error reading output: {e}{Colors.RESET}")
    print(
-        f"\n{Colors.CYAN}Session is {'active' if status.get('is_active') else 'inactive'}{Colors.RESET}"
+        f"\n{Colors.CYAN}Session is {('active' if status.get('is_active') else 'inactive')}{Colors.RESET}"
    )
@@ -88,16 +75,11 @@ def detach_session(args):
    if not args or len(args) < 1:
        print(f"{Colors.RED}Usage: detach_session <session_name>{Colors.RESET}")
        return
    session_name = args[0]
    mux = get_multiplexer(session_name)
    if not mux:
        print(f"{Colors.RED}Session '{session_name}' not found.{Colors.RESET}")
        return
-    # In this implementation, detaching just means we stop displaying output
-    # The session continues to run in the background
    mux.show_output = False
    print(
        f"{Colors.GREEN}Detached from session '{session_name}'. It continues running in background.{Colors.RESET}"
@@ -109,9 +91,7 @@ def kill_session(args):
    if not args or len(args) < 1:
        print(f"{Colors.RED}Usage: kill_session <session_name>{Colors.RESET}")
        return
    session_name = args[0]
    try:
        close_interactive_session(session_name)
        print(f"{Colors.GREEN}Session '{session_name}' terminated.{Colors.RESET}")
@@ -124,10 +104,8 @@ def send_command(args):
    if not args or len(args) < 2:
        print(f"{Colors.RED}Usage: send_command <session_name> <command>{Colors.RESET}")
        return
    session_name = args[0]
    command = " ".join(args[1:])
    try:
        send_input_to_session(session_name, command)
        print(f"{Colors.GREEN}Sent command to '{session_name}': {command}{Colors.RESET}")
@@ -140,24 +118,19 @@ def show_session_log(args):
    if not args or len(args) < 1:
        print(f"{Colors.RED}Usage: show_session_log <session_name>{Colors.RESET}")
        return
    session_name = args[0]
    try:
-        output = read_session_output(session_name)  # Get all output
+        output = read_session_output(session_name)
        print(f"{Colors.BOLD}Full log for session: {session_name}{Colors.RESET}")
        print("=" * 80)
        if output["stdout"]:
            print(f"{Colors.GRAY}STDOUT:{Colors.RESET}")
            print(output["stdout"])
            print()
        if output["stderr"]:
            print(f"{Colors.YELLOW}STDERR:{Colors.RESET}")
            print(output["stderr"])
            print()
    except Exception as e:
        print(f"{Colors.RED}Error reading log for '{session_name}': {e}{Colors.RESET}")
@@ -167,35 +140,26 @@ def show_session_status(args):
    if not args or len(args) < 1:
        print(f"{Colors.RED}Usage: show_session_status <session_name>{Colors.RESET}")
        return
    session_name = args[0]
    status = get_session_status(session_name)
    if not status:
        print(f"{Colors.RED}Session '{session_name}' not found.{Colors.RESET}")
        return
    print(f"{Colors.BOLD}Status for session: {session_name}{Colors.RESET}")
    print("-" * 50)
    metadata = status.get("metadata", {})
    print(f"Process type: {metadata.get('process_type', 'unknown')}")
    print(f"Active: {status.get('is_active', False)}")
    if "pid" in status:
        print(f"PID: {status['pid']}")
    print(f"Start time: {metadata.get('start_time', 0):.1f}")
    print(f"Last activity: {metadata.get('last_activity', 0):.1f}")
    print(f"Interaction count: {metadata.get('interaction_count', 0)}")
    print(f"State: {metadata.get('state', 'unknown')}")
    output_summary = status.get("output_summary", {})
    print(
        f"Output lines: {output_summary.get('stdout_lines', 0)} stdout, {output_summary.get('stderr_lines', 0)} stderr"
    )
-    # Show prompt detection info
    detector = get_global_detector()
    session_info = detector.get_session_info(session_name)
    if session_info:
@@ -207,33 +171,27 @@ def list_waiting_sessions(args=None):
    """List sessions that appear to be waiting for input."""
    sessions = list_active_sessions()
    detector = get_global_detector()
    waiting_sessions = []
    for session_name in sessions:
        if detector.is_waiting_for_input(session_name):
            waiting_sessions.append(session_name)
    if not waiting_sessions:
        print(f"{Colors.GREEN}No sessions are currently waiting for input.{Colors.RESET}")
        return
    print(f"{Colors.BOLD}Sessions waiting for input:{Colors.RESET}")
    for session_name in waiting_sessions:
        status = get_session_status(session_name)
        if status:
            process_type = status.get("metadata", {}).get("process_type", "unknown")
            print(f" {Colors.CYAN}{session_name}{Colors.RESET} ({process_type})")
-            # Show suggestions
            session_info = detector.get_session_info(session_name)
            if session_info:
                suggestions = detector.get_response_suggestions({}, process_type)
                if suggestions:
-                    print(f" Suggested inputs: {', '.join(suggestions[:3])}")  # Show first 3
+                    print(f" Suggested inputs: {', '.join(suggestions[:3])}")
        print()

-# Command registry for the multiplexer commands
MULTIPLEXER_COMMANDS = {
    "show_sessions": show_sessions,
    "attach_session": attach_session,

rp/config.py

@@ -3,20 +3,19 @@ import os
DEFAULT_MODEL = "x-ai/grok-code-fast-1"
DEFAULT_API_URL = "https://static.molodetz.nl/rp.cgi/api/v1/chat/completions"
MODEL_LIST_URL = "https://static.molodetz.nl/rp.cgi/api/v1/models"
-DB_PATH = os.path.expanduser("~/.assistant_db.sqlite")
-LOG_FILE = os.path.expanduser("~/.assistant_error.log")
+config_directory = os.path.expanduser("~/.local/share/rp")
+os.makedirs(config_directory, exist_ok=True)
+DB_PATH = os.path.join(config_directory, "assistant_db.sqlite")
+LOG_FILE = os.path.join(config_directory, "assistant_error.log")
CONTEXT_FILE = ".rcontext.txt"
-GLOBAL_CONTEXT_FILE = os.path.expanduser("~/.rcontext.txt")
-HISTORY_FILE = os.path.expanduser("~/.assistant_history")
+GLOBAL_CONTEXT_FILE = os.path.join(config_directory, "rcontext.txt")
+KNOWLEDGE_PATH = os.path.join(config_directory, "knowledge")
+HISTORY_FILE = os.path.join(config_directory, "assistant_history")
DEFAULT_TEMPERATURE = 0.1
DEFAULT_MAX_TOKENS = 4096
MAX_AUTONOMOUS_ITERATIONS = 50
CONTEXT_COMPRESSION_THRESHOLD = 15
RECENT_MESSAGES_TO_KEEP = 20
API_TOTAL_TOKEN_LIMIT = 256000
MAX_OUTPUT_TOKENS = 30000
SAFETY_BUFFER_TOKENS = 30000
@@ -25,7 +24,6 @@ CHARS_PER_TOKEN = 2.0
EMERGENCY_MESSAGES_TO_KEEP = 3
CONTENT_TRIM_LENGTH = 30000
MAX_TOOL_RESULT_LENGTH = 30000
LANGUAGE_KEYWORDS = {
    "python": [
        "def",
@@ -100,54 +98,41 @@ LANGUAGE_KEYWORDS = {
        "finally",
    ],
}
CACHE_ENABLED = True
API_CACHE_TTL = 3600
TOOL_CACHE_TTL = 300
WORKFLOW_MAX_RETRIES = 3
WORKFLOW_DEFAULT_TIMEOUT = 300
WORKFLOW_EXECUTOR_MAX_WORKERS = 5
AGENT_DEFAULT_TEMPERATURE = 0.7
AGENT_MAX_WORKERS = 3
AGENT_SESSION_TIMEOUT = 7200
KNOWLEDGE_IMPORTANCE_THRESHOLD = 0.5
KNOWLEDGE_SEARCH_LIMIT = 5
MEMORY_AUTO_SUMMARIZE = True
CONVERSATION_SUMMARY_THRESHOLD = 20
ADVANCED_CONTEXT_ENABLED = True
CONTEXT_RELEVANCE_THRESHOLD = 0.3
ADAPTIVE_CONTEXT_MIN = 10
ADAPTIVE_CONTEXT_MAX = 50
-# Background monitoring and multiplexer configuration
BACKGROUND_MONITOR_ENABLED = True
-BACKGROUND_MONITOR_INTERVAL = 5.0  # seconds
-AUTONOMOUS_INTERACTION_INTERVAL = 10.0  # seconds
-MULTIPLEXER_BUFFER_SIZE = 1000  # lines
-MULTIPLEXER_OUTPUT_TIMEOUT = 30  # seconds
+BACKGROUND_MONITOR_INTERVAL = 5.0
+AUTONOMOUS_INTERACTION_INTERVAL = 10.0
+MULTIPLEXER_BUFFER_SIZE = 1000
+MULTIPLEXER_OUTPUT_TIMEOUT = 30
MAX_CONCURRENT_SESSIONS = 10
-# Process-specific timeouts (seconds)
PROCESS_TIMEOUTS = {
-    "default": 300,  # 5 minutes
-    "apt": 600,  # 10 minutes
-    "ssh": 60,  # 1 minute
-    "vim": 3600,  # 1 hour
-    "git": 300,  # 5 minutes
-    "npm": 600,  # 10 minutes
-    "pip": 300,  # 5 minutes
+    "default": 300,
+    "apt": 600,
+    "ssh": 60,
+    "vim": 3600,
+    "git": 300,
+    "npm": 600,
+    "pip": 300,
}
-# Activity thresholds for LLM notification
-HIGH_OUTPUT_THRESHOLD = 50  # lines
-INACTIVE_THRESHOLD = 300  # seconds
-SESSION_NOTIFY_INTERVAL = 60  # seconds
+HIGH_OUTPUT_THRESHOLD = 50
+INACTIVE_THRESHOLD = 300
+SESSION_NOTIFY_INTERVAL = 60
-# Autonomous behavior flags
ENABLE_AUTONOMOUS_SESSIONS = True
ENABLE_BACKGROUND_UPDATES = True
ENABLE_TIMEOUT_DETECTION = True
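The path changes above move state out of scattered `~/.assistant_*` dotfiles into a single `~/.local/share/rp` directory that is created when the module is imported. A quick way to inspect the resolved locations (a sketch assuming the `rp` package is importable; the `rp.config` module path is inferred from imports elsewhere in this changeset):

# Print where the reworked config now places its state files.
from rp import config

for name in ("DB_PATH", "LOG_FILE", "GLOBAL_CONTEXT_FILE", "HISTORY_FILE", "KNOWLEDGE_PATH"):
    print(f"{name} = {getattr(config, name)}")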

rp/core/__init__.py (new file, +5)

@@ -0,0 +1,5 @@
from rp.core.api import call_api, list_models
from rp.core.assistant import Assistant
from rp.core.context import init_system_message, manage_context_window

__all__ = ["Assistant", "call_api", "list_models", "init_system_message", "manage_context_window"]

rp/core/advanced_context.py (new file, +137)

@@ -0,0 +1,137 @@
import re
from typing import Any, Dict, List


class AdvancedContextManager:
    def __init__(self, knowledge_store=None, conversation_memory=None):
        self.knowledge_store = knowledge_store
        self.conversation_memory = conversation_memory

    def adaptive_context_window(self, messages: List[Dict[str, Any]], complexity: str) -> int:
        """Calculate adaptive context window size based on message complexity."""
        base_window = 10
        complexity_multipliers = {"simple": 1.0, "medium": 2.0, "complex": 3.5, "very_complex": 5.0}
        multiplier = complexity_multipliers.get(complexity, 2.0)
        return int(base_window * multiplier)

    def _analyze_message_complexity(self, messages: List[Dict[str, Any]]) -> float:
        """Analyze the complexity of messages and return a score between 0.0 and 1.0."""
        if not messages:
            return 0.0
        total_complexity = 0.0
        for message in messages:
            content = message.get("content", "")
            if not content:
                continue
            word_count = len(content.split())
            sentence_count = len(re.split("[.!?]+", content))
            avg_word_length = sum((len(word) for word in content.split())) / max(word_count, 1)
            length_score = min(1.0, word_count / 100)
            structure_score = min(1.0, sentence_count / 10)
            vocabulary_score = min(1.0, avg_word_length / 8)
            message_complexity = (length_score + structure_score + vocabulary_score) / 3
            total_complexity += message_complexity
        return min(1.0, total_complexity / len(messages))

    def extract_key_sentences(self, text: str, top_k: int = 5) -> List[str]:
        if not text.strip():
            return []
        sentences = re.split("(?<=[.!?])\\s+", text)
        if not sentences:
            return []
        scored_sentences = []
        for i, sentence in enumerate(sentences):
            length_score = min(1.0, len(sentence) / 50)
            position_score = 1.0 if i == 0 else 0.8 if i < len(sentences) / 2 else 0.6
            score = (length_score + position_score) / 2
            scored_sentences.append((sentence, score))
        scored_sentences.sort(key=lambda x: x[1], reverse=True)
        return [s[0] for s in scored_sentences[:top_k]]

    def advanced_summarize_messages(self, messages: List[Dict[str, Any]]) -> str:
        all_content = " ".join([msg.get("content", "") for msg in messages])
        key_sentences = self.extract_key_sentences(all_content, top_k=3)
        summary = " ".join(key_sentences)
        return summary if summary else "No content to summarize."

    def score_message_relevance(self, message: Dict[str, Any], context: str) -> float:
        content = message.get("content", "")
        content_words = set(re.findall("\\b\\w+\\b", content.lower()))
        context_words = set(re.findall("\\b\\w+\\b", context.lower()))
        intersection = content_words & context_words
        union = content_words | context_words
        if not union:
            return 0.0
        return len(intersection) / len(union)

    def create_enhanced_context(
        self, messages: List[Dict[str, Any]], user_message: str, include_knowledge: bool = True
    ) -> tuple:
        """Create enhanced context with knowledge base and conversation memory integration."""
        working_messages = messages.copy()
        all_results = []
        # Search knowledge base
        if include_knowledge and self.knowledge_store:
            knowledge_results = self.knowledge_store.search_entries(user_message, top_k=3)
            for entry in knowledge_results:
                score = entry.metadata.get("search_score", 0.5)
                all_results.append(
                    {
                        "content": entry.content,
                        "score": score,
                        "source": f"Knowledge Base ({entry.category})",
                        "type": "knowledge",
                    }
                )
        # Search conversation memory
        if self.conversation_memory:
            from rp.core.knowledge_context import calculate_text_similarity

            history_results = self.conversation_memory.search_conversations(user_message, limit=3)
            for conv in history_results:
                conv_messages = self.conversation_memory.get_conversation_messages(
                    conv["conversation_id"]
                )
                for msg in conv_messages[-5:]:  # Last 5 messages from each conversation
                    if msg["role"] == "user" and msg["content"] != user_message:
                        relevance = calculate_text_similarity(user_message, msg["content"])
                        if relevance > 0.3:
                            all_results.append(
                                {
                                    "content": msg["content"],
                                    "score": relevance,
                                    "source": f"Previous conversation: {conv['conversation_id'][:8]}",
                                    "type": "conversation",
                                }
                            )
        # Sort and limit results
        all_results.sort(key=lambda x: x["score"], reverse=True)
        top_results = all_results[:5]
        if top_results:
            knowledge_parts = []
            for idx, result in enumerate(top_results, 1):
                content = result["content"]
                if len(content) > 1500:
                    content = content[:1500] + "..."
                score_indicator = f"({result['score']:.2f})" if result["score"] < 1.0 else "(exact)"
                knowledge_parts.append(
                    f"Match {idx} {score_indicator} - {result['source']}:\n{content}"
                )
            knowledge_message_content = (
                "[KNOWLEDGE_BASE_CONTEXT]\nRelevant information from knowledge base and conversation history:\n\n"
                + "\n\n".join(knowledge_parts)
            )
            knowledge_message = {"role": "user", "content": knowledge_message_content}
            working_messages.append(knowledge_message)
            context_info = (
                f"Added {len(top_results)} matches from knowledge and conversation history"
            )
        else:
            context_info = "No relevant knowledge or conversation matches found"
        return (working_messages, context_info)
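A short usage sketch for the manager above (assumes the `rp` package is importable; no knowledge store or conversation memory is wired in, so only the self-contained scoring paths run, and the private complexity helper is called purely for illustration):

# Exercise the pure scoring helpers of AdvancedContextManager.
from rp.core.advanced_context import AdvancedContextManager

mgr = AdvancedContextManager()
msgs = [{"role": "user", "content": "Parse the config file and report every path that does not exist."}]
print(mgr.adaptive_context_window(msgs, "complex"))  # 35, i.e. int(10 * 3.5)
print(mgr._analyze_message_complexity(msgs))  # blended length/structure/vocabulary score in [0, 1]
print(mgr.score_message_relevance(msgs[0], "config file paths"))  # Jaccard word overlap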

rp/core/api.py (new file, +91)

@@ -0,0 +1,91 @@
import json
import logging

from rp.config import DEFAULT_MAX_TOKENS, DEFAULT_TEMPERATURE
from rp.core.context import auto_slim_messages
from rp.core.http_client import http_client

logger = logging.getLogger("rp")


def call_api(messages, model, api_url, api_key, use_tools, tools_definition, verbose=False):
    try:
        messages = auto_slim_messages(messages, verbose=verbose)
        logger.debug(f"=== API CALL START ===")
        logger.debug(f"Model: {model}")
        logger.debug(f"API URL: {api_url}")
        logger.debug(f"Use tools: {use_tools}")
        logger.debug(f"Message count: {len(messages)}")
        headers = {"Content-Type": "application/json"}
        if api_key:
            headers["Authorization"] = f"Bearer {api_key}"
        data = {
            "model": model,
            "messages": messages,
            "temperature": DEFAULT_TEMPERATURE,
            "max_tokens": DEFAULT_MAX_TOKENS,
        }
        if "gpt-5" in model:
            del data["temperature"]
            del data["max_tokens"]
            logger.debug("GPT-5 detected: removed temperature and max_tokens")
        if use_tools:
            data["tools"] = tools_definition
            data["tool_choice"] = "auto"
            logger.debug(f"Tool calling enabled with {len(tools_definition)} tools")
        request_json = data
        logger.debug(f"Request payload size: {len(request_json)} bytes")
        logger.debug("Sending HTTP request...")
        response = http_client.post(api_url, headers=headers, json_data=request_json)
        if response.get("error"):
            if "status" in response:
                logger.error(f"API HTTP Error: {response['status']} - {response.get('text', '')}")
                logger.debug("=== API CALL FAILED ===")
                return {
                    "error": f"API Error: {response['status']}",
                    "message": response.get("text", ""),
                }
            else:
                logger.error(f"API call failed: {response.get('exception', 'Unknown error')}")
                logger.debug("=== API CALL FAILED ===")
                return {"error": response.get("exception", "Unknown error")}
        response_data = response["text"]
        logger.debug(f"Response received: {len(response_data)} bytes")
        result = json.loads(response_data)
        if "usage" in result:
            logger.debug(f"Token usage: {result['usage']}")
        if "choices" in result and result["choices"]:
            choice = result["choices"][0]
            if "message" in choice:
                msg = choice["message"]
                logger.debug(f"Response role: {msg.get('role', 'N/A')}")
                if "content" in msg and msg["content"]:
                    logger.debug(f"Response content length: {len(msg['content'])} chars")
                if "tool_calls" in msg:
                    logger.debug(f"Response contains {len(msg['tool_calls'])} tool call(s)")
        if verbose and "usage" in result:
            from rp.core.usage_tracker import UsageTracker

            usage = result["usage"]
            input_t = usage.get("prompt_tokens", 0)
            output_t = usage.get("completion_tokens", 0)
            UsageTracker._calculate_cost(model, input_t, output_t)
        logger.debug("=== API CALL END ===")
        return result
    except Exception as e:
        logger.error(f"API call failed: {e}")
        logger.debug("=== API CALL FAILED ===")
        return {"error": str(e)}


def list_models(model_list_url, api_key):
    try:
        headers = {}
        if api_key:
            headers["Authorization"] = f"Bearer {api_key}"
        response = http_client.get(model_list_url, headers=headers)
        if response.get("error"):
            return {"error": response.get("text", "HTTP error")}
        data = json.loads(response["text"])
        return data.get("data", [])
    except Exception as e:
        return {"error": str(e)}

rp/core/assistant.py

@@ -8,9 +8,9 @@ import sqlite3
import sys
import traceback
from concurrent.futures import ThreadPoolExecutor

-from pr.commands import handle_command
-from pr.config import (
+from rp.commands import handle_command
+from rp.input_handler import get_advanced_input
+from rp.config import (
    DB_PATH,
    DEFAULT_API_URL,
    DEFAULT_MODEL,
@@ -18,76 +18,70 @@ from pr.config import (
    LOG_FILE,
    MODEL_LIST_URL,
)
-from pr.core.api import call_api
-from pr.core.autonomous_interactions import (
-    get_global_autonomous,
-    stop_global_autonomous,
-)
-from pr.core.background_monitor import (
-    get_global_monitor,
-    start_global_monitor,
-    stop_global_monitor,
-)
-from pr.core.context import init_system_message, truncate_tool_result
-from pr.tools import (
-    apply_patch,
-    chdir,
-    close_editor,
-    create_diff,
-    db_get,
-    db_query,
-    db_set,
-    editor_insert_text,
-    editor_replace_text,
-    editor_search,
-    getpwd,
-    http_fetch,
-    index_source_directory,
-    kill_process,
-    list_directory,
-    mkdir,
-    open_editor,
-    python_exec,
-    read_file,
-    run_command,
-    search_replace,
-    tail_process,
-    web_search,
-    web_search_news,
-    write_file,
-)
-from pr.tools.base import get_tools_definition
-from pr.tools.filesystem import (
-    clear_edit_tracker,
-    display_edit_summary,
-    display_edit_timeline,
-)
-from pr.tools.interactive_control import (
+from rp.core.api import call_api
+from rp.core.autonomous_interactions import start_global_autonomous, stop_global_autonomous
+from rp.core.background_monitor import get_global_monitor, start_global_monitor, stop_global_monitor
+from rp.core.context import init_system_message, truncate_tool_result
+from rp.core.usage_tracker import UsageTracker
+from rp.tools import get_tools_definition
+from rp.tools.agents import (
+    collaborate_agents,
+    create_agent,
+    execute_agent_task,
+    list_agents,
+    remove_agent,
+)
+from rp.tools.command import kill_process, run_command, tail_process
+from rp.tools.database import db_get, db_query, db_set
+from rp.tools.filesystem import (
+    chdir,
+    clear_edit_tracker,
+    display_edit_summary,
+    display_edit_timeline,
+    getpwd,
+    index_source_directory,
+    list_directory,
+    mkdir,
+    read_file,
+    search_replace,
+    write_file,
+)
+from rp.tools.interactive_control import (
    close_interactive_session,
    list_active_sessions,
    read_session_output,
    send_input_to_session,
    start_interactive_session,
)
-from pr.tools.patch import display_file_diff
-from pr.ui import Colors, render_markdown
+from rp.tools.memory import (
+    add_knowledge_entry,
+    delete_knowledge_entry,
+    get_knowledge_by_category,
+    get_knowledge_entry,
+    get_knowledge_statistics,
+    search_knowledge,
+    update_knowledge_importance,
+)
+from rp.tools.patch import apply_patch, create_diff, display_file_diff
+from rp.tools.python_exec import python_exec
+from rp.tools.web import http_fetch, web_search, web_search_news
+from rp.ui import Colors, Spinner, render_markdown

-logger = logging.getLogger("pr")
+logger = logging.getLogger("rp")
logger.setLevel(logging.DEBUG)
file_handler = logging.FileHandler(LOG_FILE)
file_handler.setFormatter(logging.Formatter("%(asctime)s - %(levelname)s - %(message)s"))
logger.addHandler(file_handler)
class Assistant:
    def __init__(self, args):
        self.args = args
        self.messages = []
        self.verbose = args.verbose
        self.debug = getattr(args, "debug", False)
        self.syntax_highlighting = not args.no_syntax
        if self.debug:
            console_handler = logging.StreamHandler()
            console_handler.setLevel(logging.DEBUG)
@@ -95,24 +89,26 @@ class Assistant:
            logger.addHandler(console_handler)
            logger.debug("Debug mode enabled")
        self.api_key = os.environ.get("OPENROUTER_API_KEY", "")
+        if not self.api_key:
+            print("Warning: OPENROUTER_API_KEY environment variable not set. API calls may fail.")
        self.model = args.model or os.environ.get("AI_MODEL", DEFAULT_MODEL)
        self.api_url = args.api_url or os.environ.get("API_URL", DEFAULT_API_URL)
        self.model_list_url = args.model_list_url or os.environ.get(
            "MODEL_LIST_URL", MODEL_LIST_URL
        )
        self.use_tools = os.environ.get("USE_TOOLS", "1") == "1"
+        self.strict_mode = os.environ.get("STRICT_MODE", "0") == "1"
        self.interrupt_count = 0
        self.python_globals = {}
        self.db_conn = None
        self.autonomous_mode = False
        self.autonomous_iterations = 0
        self.background_monitoring = False
+        self.usage_tracker = UsageTracker()
+        self.background_tasks = set()
        self.init_database()
        self.messages.append(init_system_message(args))
        try:
-            from pr.core.enhanced_assistant import EnhancedAssistant
+            from rp.core.enhanced_assistant import EnhancedAssistant

            self.enhanced = EnhancedAssistant(self)
            if self.debug:
@@ -120,12 +116,9 @@ class Assistant:
        except Exception as e:
            logger.warning(f"Could not initialize enhanced features: {e}")
            self.enhanced = None
-        # Initialize background monitoring components
        try:
            start_global_monitor()
-            autonomous = get_global_autonomous()
-            autonomous.start(llm_callback=self._handle_background_updates)
+            start_global_autonomous(llm_callback=self._handle_background_updates)
            self.background_monitoring = True
            if self.debug:
                logger.debug("Background monitoring initialized")
@@ -138,19 +131,12 @@ class Assistant:
            logger.debug(f"Initializing database at {DB_PATH}")
            self.db_conn = sqlite3.connect(DB_PATH, check_same_thread=False)
            cursor = self.db_conn.cursor()
            cursor.execute(
-                """CREATE TABLE IF NOT EXISTS kv_store
-                (key TEXT PRIMARY KEY, value TEXT, timestamp REAL)"""
+                "CREATE TABLE IF NOT EXISTS kv_store\n                (key TEXT PRIMARY KEY, value TEXT, timestamp REAL)"
            )
            cursor.execute(
-                """CREATE TABLE IF NOT EXISTS file_versions
-                (id INTEGER PRIMARY KEY AUTOINCREMENT,
-                filepath TEXT, content TEXT, hash TEXT,
-                timestamp REAL, version INTEGER)"""
+                "CREATE TABLE IF NOT EXISTS file_versions\n                (id INTEGER PRIMARY KEY AUTOINCREMENT,\n                filepath TEXT, content TEXT, hash TEXT,\n                timestamp REAL, version INTEGER)"
            )
            self.db_conn.commit()
            logger.debug("Database initialized successfully")
        except Exception as e:
@@ -161,30 +147,20 @@ class Assistant:
        """Handle background session updates by injecting them into the conversation."""
        if not updates or not updates.get("sessions"):
            return
-        # Format the update as a system message
        update_message = self._format_background_update_message(updates)
-        # Inject into current conversation if we're in an active session
        if self.messages and len(self.messages) > 0:
            self.messages.append(
-                {
-                    "role": "system",
-                    "content": f"Background session updates: {update_message}",
-                }
+                {"role": "system", "content": f"Background session updates: {update_message}"}
            )
            if self.verbose:
                print(f"{Colors.CYAN}Background update: {update_message}{Colors.RESET}")

    def _format_background_update_message(self, updates):
        """Format background updates for LLM consumption."""
        session_summaries = []
        for session_name, session_info in updates.get("sessions", {}).items():
            summary = session_info.get("summary", f"Session {session_name}")
            session_summaries.append(f"{session_name}: {summary}")
        if session_summaries:
            return "Active background sessions: " + "; ".join(session_summaries)
        else:
@@ -194,17 +170,14 @@ class Assistant:
        """Check for pending background updates and display them."""
        if not self.background_monitoring:
            return
        try:
            monitor = get_global_monitor()
            events = monitor.get_pending_events()
            if events:
                print(f"\n{Colors.CYAN}Background Events:{Colors.RESET}")
                for event in events:
                    event_type = event.get("type", "unknown")
                    session_name = event.get("session_name", "unknown")
                    if event_type == "session_started":
                        print(f" {Colors.GREEN}{Colors.RESET} Session '{session_name}' started")
                    elif event_type == "session_ended":
@@ -228,26 +201,20 @@ class Assistant:
                        print(
                            f" {Colors.GRAY}{Colors.RESET} Session '{session_name}' inactive for {inactive_time:.0f}s"
                        )
-                print()  # Add blank line after events
+                print()
        except Exception as e:
            if self.debug:
                print(f"{Colors.RED}Error checking background updates: {e}{Colors.RESET}")

    def execute_tool_calls(self, tool_calls):
        results = []
        logger.debug(f"Executing {len(tool_calls)} tool call(s)")
        with ThreadPoolExecutor(max_workers=5) as executor:
            futures = []
            for tool_call in tool_calls:
                func_name = tool_call["function"]["name"]
                arguments = json.loads(tool_call["function"]["arguments"])
                logger.debug(f"Tool call: {func_name} with arguments: {arguments}")
                func_map = {
                    "http_fetch": lambda **kw: http_fetch(**kw),
                    "run_command": lambda **kw: run_command(**kw),
@@ -273,15 +240,6 @@ class Assistant:
                    ),
                    "index_source_directory": lambda **kw: index_source_directory(**kw),
                    "search_replace": lambda **kw: search_replace(**kw, db_conn=self.db_conn),
-                    "open_editor": lambda **kw: open_editor(**kw),
-                    "editor_insert_text": lambda **kw: editor_insert_text(
-                        **kw, db_conn=self.db_conn
-                    ),
-                    "editor_replace_text": lambda **kw: editor_replace_text(
-                        **kw, db_conn=self.db_conn
-                    ),
-                    "editor_search": lambda **kw: editor_search(**kw),
-                    "close_editor": lambda **kw: close_editor(**kw),
                    "create_diff": lambda **kw: create_diff(**kw),
                    "apply_patch": lambda **kw: apply_patch(**kw, db_conn=self.db_conn),
                    "display_file_diff": lambda **kw: display_file_diff(**kw),
@@ -306,22 +264,16 @@ class Assistant:
                    "delete_knowledge_entry": lambda **kw: delete_knowledge_entry(**kw),
                    "get_knowledge_statistics": lambda **kw: get_knowledge_statistics(**kw),
                }
                if func_name in func_map:
                    future = executor.submit(func_map[func_name], **arguments)
                    futures.append((tool_call["id"], future))
            for tool_id, future in futures:
                try:
                    result = future.result(timeout=30)
                    result = truncate_tool_result(result)
                    logger.debug(f"Tool result for {tool_id}: {str(result)[:200]}...")
                    results.append(
-                        {
-                            "tool_call_id": tool_id,
-                            "role": "tool",
-                            "content": json.dumps(result),
-                        }
+                        {"tool_call_id": tool_id, "role": "tool", "content": json.dumps(result)}
                    )
                except Exception as e:
                    logger.debug(f"Tool error for {tool_id}: {str(e)}")
@@ -333,28 +285,22 @@ class Assistant:
                            "content": json.dumps({"status": "error", "error": error_msg}),
                        }
                    )
        return results

    def process_response(self, response):
        if "error" in response:
            return f"Error: {response['error']}"
        if "choices" not in response or not response["choices"]:
            return "No response from API"
        message = response["choices"][0]["message"]
        self.messages.append(message)
        if "tool_calls" in message and message["tool_calls"]:
-            if self.verbose:
-                print(f"{Colors.YELLOW}Executing tool calls...{Colors.RESET}")
+            tool_count = len(message["tool_calls"])
+            print(f"{Colors.BLUE}🔧 Executing {tool_count} tool call(s)...{Colors.RESET}")
            tool_results = self.execute_tool_calls(message["tool_calls"])
+            print(f"{Colors.GREEN}✅ Tool execution completed.{Colors.RESET}")
            for result in tool_results:
                self.messages.append(result)
            follow_up = call_api(
                self.messages,
                self.model,
@@ -365,7 +311,6 @@ class Assistant:
                verbose=self.verbose,
            )
            return self.process_response(follow_up)
        content = message.get("content", "")
        return render_markdown(content, self.syntax_highlighting)
@@ -379,7 +324,6 @@ class Assistant:
        else:
            print(f"\n{Colors.YELLOW}Press Ctrl+C again to force exit{Colors.RESET}")
            return
        self.interrupt_count += 1
        if self.interrupt_count >= 2:
            print(f"\n{Colors.RED}Exiting...{Colors.RESET}")
@@ -393,13 +337,10 @@ class Assistant:
                readline.read_history_file(HISTORY_FILE)
            except FileNotFoundError:
                pass
        readline.set_history_length(1000)
        import atexit

        atexit.register(readline.write_history_file, HISTORY_FILE)
        commands = [
            "exit",
            "quit",
@@ -413,73 +354,73 @@ class Assistant:
            "refactor",
            "obfuscate",
            "/auto",
-            "/edit",
        ]

        def completer(text, state):
            options = [cmd for cmd in commands if cmd.startswith(text)]
            glob_pattern = os.path.expanduser(text) + "*"
            path_options = glob_module.glob(glob_pattern)
            path_options = [p + os.sep if os.path.isdir(p) else p for p in path_options]
            combined_options = sorted(list(set(options + path_options)))
-            # combined_options.extend(self.commands)
            if state < len(combined_options):
                return combined_options[state]
            return None

        delims = readline.get_completer_delims()
        readline.set_completer_delims(delims.replace("/", ""))
        readline.set_completer(completer)
        readline.parse_and_bind("tab: complete")

    def run_repl(self):
        self.setup_readline()
        signal.signal(signal.SIGINT, self.signal_handler)
-        print(f"{Colors.BOLD}r{Colors.RESET}")
-        print(f"Type 'help' for commands or start chatting")
+        print(
+            f"{Colors.BOLD}{Colors.CYAN}╔══════════════════════════════════════════════╗{Colors.RESET}"
+        )
+        print(
+            f"{Colors.BOLD}{Colors.CYAN}║{Colors.RESET}{Colors.BOLD} RP Assistant v{__import__('rp').__version__} {Colors.RESET}{Colors.BOLD}{Colors.CYAN}║{Colors.RESET}"
+        )
+        print(
+            f"{Colors.BOLD}{Colors.CYAN}╚══════════════════════════════════════════════╝{Colors.RESET}"
+        )
+        print(
+            f"{Colors.GRAY}Type 'help' for commands, 'exit' to quit, or start chatting.{Colors.RESET}"
+        )
+        print(f"{Colors.GRAY}AI calls will show costs and progress indicators.{Colors.RESET}\n")
        while True:
            try:
-                # Check for background updates before prompting user
                if self.background_monitoring:
                    self._check_background_updates()
-                # Create prompt with background status
                prompt = f"{Colors.BLUE}You"
                if self.background_monitoring:
                    try:
-                        from pr.multiplexer import get_all_sessions
+                        from rp.multiplexer import get_all_sessions

                        sessions = get_all_sessions()
                        active_count = sum(
-                            1 for s in sessions.values() if s.get("status") == "running"
+                            (1 for s in sessions.values() if s.get("status") == "running")
                        )
                        if active_count > 0:
                            prompt += f"[{active_count}bg]"
                    except:
                        pass
                prompt += f">{Colors.RESET} "
-                user_input = input(prompt).strip()
+                user_input = get_advanced_input(prompt)
+                user_input = user_input.strip()
                if not user_input:
                    continue
                cmd_result = handle_command(self, user_input)
                if cmd_result is False:
                    break
                elif cmd_result is True:
                    continue
-                # Use enhanced processing if available, otherwise fall back to basic processing
-                process_message(self, user_input)
+                if hasattr(self, "enhanced") and self.enhanced:
+                    result = self.enhanced.process_with_enhanced_context(user_input)
+                    print(result)
+                else:
+                    process_message(self, user_input)
            except EOFError:
                break
            except KeyboardInterrupt:
@@ -493,10 +434,7 @@ class Assistant:
            message = self.args.message
        else:
            message = sys.stdin.read()
-        from pr.autonomous.mode import run_autonomous_mode
-
-        run_autonomous_mode(self, message)
+        process_message(self, message)

    def cleanup(self):
        if hasattr(self, "enhanced") and self.enhanced:
@@ -504,49 +442,40 @@ class Assistant:
                self.enhanced.cleanup()
            except Exception as e:
                logger.error(f"Error cleaning up enhanced features: {e}")
-        # Stop background monitoring
        if self.background_monitoring:
            try:
                stop_global_autonomous()
                stop_global_monitor()
            except Exception as e:
                logger.error(f"Error stopping background monitoring: {e}")
        try:
-            from pr.multiplexer import cleanup_all_multiplexers
+            from rp.multiplexer import cleanup_all_multiplexers

            cleanup_all_multiplexers()
        except Exception as e:
            logger.error(f"Error cleaning up multiplexers: {e}")
        if self.db_conn:
            self.db_conn.close()

    def run(self):
        try:
-            print(
-                f"DEBUG: interactive={self.args.interactive}, message={self.args.message}, isatty={sys.stdin.isatty()}"
-            )
            if self.args.interactive or (not self.args.message and sys.stdin.isatty()):
-                print("DEBUG: calling run_repl")
                self.run_repl()
            else:
-                print("DEBUG: calling run_single")
                self.run_single()
        finally:
            self.cleanup()


def process_message(assistant, message):
-    assistant.messages.append({"role": "user", "content": message})
+    from rp.core.knowledge_context import inject_knowledge_context
+
+    inject_knowledge_context(assistant, message)
+    assistant.messages.append({"role": "user", "content": message})
    logger.debug(f"Processing user message: {message[:100]}...")
    logger.debug(f"Current message count: {len(assistant.messages)}")
-    if assistant.verbose:
-        print(f"{Colors.GRAY}Sending request to API...{Colors.RESET}")
+    spinner = Spinner("Querying AI...")
+    spinner.start()
    response = call_api(
        assistant.messages,
        assistant.model,
@@ -556,6 +485,14 @@ def process_message(assistant, message):
get_tools_definition(), get_tools_definition(),
verbose=assistant.verbose, verbose=assistant.verbose,
) )
spinner.stop()
if "usage" in response:
usage = response["usage"]
input_tokens = usage.get("prompt_tokens", 0)
output_tokens = usage.get("completion_tokens", 0)
assistant.usage_tracker.track_request(assistant.model, input_tokens, output_tokens)
cost = UsageTracker._calculate_cost(assistant.model, input_tokens, output_tokens)
total_cost = assistant.usage_tracker.session_usage["estimated_cost"]
print(f"{Colors.YELLOW}💰 Cost: ${cost:.4f} | Total: ${total_cost:.4f}{Colors.RESET}")
result = assistant.process_response(response) result = assistant.process_response(response)
print(f"\n{Colors.GREEN}r:{Colors.RESET} {result}\n") print(f"\n{Colors.GREEN}r:{Colors.RESET} {result}\n")


@@ -1,7 +1,6 @@
import threading
import time
-from pr.tools.interactive_control import (
+from rp.tools.interactive_control import (
    get_session_status,
    list_active_sessions,
    read_session_output,
@@ -9,6 +8,7 @@ from pr.tools.interactive_control import (
class AutonomousInteractions:
    def __init__(self, interaction_interval=10.0):
        self.interaction_interval = interaction_interval
        self.active = False
@@ -38,9 +38,7 @@ class AutonomousInteractions:
                if current_time - self.last_check_time >= self.interaction_interval:
                    self._check_sessions_and_notify()
                    self.last_check_time = current_time
-                time.sleep(1)  # Check every second for shutdown
+                time.sleep(1)
            except Exception as e:
                print(f"Error in autonomous interaction loop: {e}")
                time.sleep(self.interaction_interval)
@@ -49,133 +47,90 @@ class AutonomousInteractions:
        """Check active sessions and determine if LLM notification is needed."""
        try:
            sessions = list_active_sessions()
            if not sessions:
-                return  # No active sessions
+                return
            sessions_needing_attention = self._identify_sessions_needing_attention(sessions)
            if sessions_needing_attention and self.llm_callback:
-                # Format session updates for LLM
                updates = self._format_session_updates(sessions_needing_attention)
                self.llm_callback(updates)
        except Exception as e:
            print(f"Error checking sessions: {e}")

    def _identify_sessions_needing_attention(self, sessions):
        """Identify which sessions need LLM attention based on various criteria."""
        needing_attention = []
        for session_name, session_data in sessions.items():
            metadata = session_data["metadata"]
            output_summary = session_data["output_summary"]
-            # Criteria for needing attention:
-            # 1. Recent output activity
            time_since_activity = time.time() - metadata.get("last_activity", 0)
-            if time_since_activity < 30:  # Activity in last 30 seconds
+            if time_since_activity < 30:
                needing_attention.append(session_name)
                continue
-            # 2. High output volume (potential completion or error)
            total_lines = output_summary["stdout_lines"] + output_summary["stderr_lines"]
-            if total_lines > 50:  # Arbitrary threshold
+            if total_lines > 50:
                needing_attention.append(session_name)
                continue
-            # 3. Long-running sessions that might need intervention
            session_age = time.time() - metadata.get("start_time", 0)
-            if (
-                session_age > 300 and time_since_activity > 60
-            ):  # 5+ minutes old, inactive for 1+ minute
+            if session_age > 300 and time_since_activity > 60:
                needing_attention.append(session_name)
                continue
-            # 4. Sessions that appear to be waiting for input
            if self._session_looks_stuck(session_name, session_data):
                needing_attention.append(session_name)
                continue
        return needing_attention

    def _session_looks_stuck(self, session_name, session_data):
        """Determine if a session appears to be stuck waiting for input."""
        metadata = session_data["metadata"]
-        # Check if process is still running
        status = get_session_status(session_name)
        if not status or not status.get("is_active", False):
            return False
        time_since_activity = time.time() - metadata.get("last_activity", 0)
        interaction_count = metadata.get("interaction_count", 0)
-        # If running for a while but no interactions, might be waiting
        session_age = time.time() - metadata.get("start_time", 0)
-        if session_age > 60 and interaction_count == 0 and time_since_activity > 30:
+        if session_age > 60 and interaction_count == 0 and (time_since_activity > 30):
            return True
-        # If had interactions but been quiet for a while
-        if interaction_count > 0 and time_since_activity > 120:  # 2 minutes
+        if interaction_count > 0 and time_since_activity > 120:
            return True
        return False

    def _format_session_updates(self, session_names):
        """Format session information for LLM consumption."""
-        updates = {
-            "type": "background_session_updates",
-            "timestamp": time.time(),
-            "sessions": {},
-        }
+        updates = {"type": "background_session_updates", "timestamp": time.time(), "sessions": {}}
        for session_name in session_names:
            status = get_session_status(session_name)
            if status:
-                # Get recent output (last 20 lines)
                try:
                    recent_output = read_session_output(session_name, lines=20)
                except:
                    recent_output = {"stdout": "", "stderr": ""}
                updates["sessions"][session_name] = {
                    "status": status,
                    "recent_output": recent_output,
                    "summary": self._create_session_summary(status, recent_output),
                }
        return updates

    def _create_session_summary(self, status, recent_output):
        """Create a human-readable summary of session status."""
        summary_parts = []
        process_type = status.get("metadata", {}).get("process_type", "unknown")
        summary_parts.append(f"Type: {process_type}")
        is_active = status.get("is_active", False)
-        summary_parts.append(f"Status: {'Active' if is_active else 'Inactive'}")
+        summary_parts.append(f"Status: {('Active' if is_active else 'Inactive')}")
        if is_active and "pid" in status:
            summary_parts.append(f"PID: {status['pid']}")
        age = time.time() - status.get("metadata", {}).get("start_time", 0)
        summary_parts.append(f"Age: {age:.1f}s")
        output_lines = len(recent_output.get("stdout", "").split("\n")) + len(
            recent_output.get("stderr", "").split("\n")
        )
        summary_parts.append(f"Recent output: {output_lines} lines")
        interaction_count = status.get("metadata", {}).get("interaction_count", 0)
        summary_parts.append(f"Interactions: {interaction_count}")
        return " | ".join(summary_parts)

-# Global autonomous interactions instance
_global_autonomous = None
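
The first three attention criteria above reduce to a small pure function over session metadata. A sketch with the same thresholds (30 s recent activity, 50-line output, 5-minute age with 1-minute idle), leaving out the stuck-session check since that one needs a live status lookup:

    import time

    def needs_attention(metadata, output_summary, now=None):
        # Pure-function restatement of criteria 1-3 above, same thresholds.
        now = now if now is not None else time.time()
        idle = now - metadata.get("last_activity", 0)
        age = now - metadata.get("start_time", 0)
        total_lines = output_summary["stdout_lines"] + output_summary["stderr_lines"]
        if idle < 30:                # 1. output in the last 30 seconds
            return True
        if total_lines > 50:         # 2. high output volume
            return True
        if age > 300 and idle > 60:  # 3. 5+ minutes old, quiet for 1+ minute
            return True
        return False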


@@ -1,11 +1,11 @@
import queue
import threading
import time
-from pr.multiplexer import get_all_multiplexer_states, get_multiplexer
+from rp.multiplexer import get_all_multiplexer_states, get_multiplexer

class BackgroundMonitor:
    def __init__(self, check_interval=5.0):
        self.check_interval = check_interval
        self.active = False
@@ -27,15 +27,6 @@ class BackgroundMonitor:
        if self.monitor_thread:
            self.monitor_thread.join(timeout=2)

-    def add_event_callback(self, callback):
-        """Add a callback function to be called when events are detected."""
-        self.event_callbacks.append(callback)
-
-    def remove_event_callback(self, callback):
-        """Remove an event callback."""
-        if callback in self.event_callbacks:
-            self.event_callbacks.remove(callback)

    def get_pending_events(self):
        """Get all pending events from the queue."""
        events = []
@@ -51,23 +42,16 @@ class BackgroundMonitor:
        while self.active:
            try:
                current_states = get_all_multiplexer_states()
-                # Detect changes and events
                events = self._detect_events(self.last_states, current_states)
-                # Queue events for processing
                for event in events:
                    self.event_queue.put(event)
-                    # Also call callbacks immediately
                    for callback in self.event_callbacks:
                        try:
                            callback(event)
                        except Exception as e:
                            print(f"Error in event callback: {e}")
                self.last_states = current_states.copy()
                time.sleep(self.check_interval)
            except Exception as e:
                print(f"Error in background monitor loop: {e}")
                time.sleep(self.check_interval)
@@ -75,8 +59,6 @@ class BackgroundMonitor:
    def _detect_events(self, old_states, new_states):
        """Detect significant events in multiplexer states."""
        events = []
-        # Check for new sessions
        for session_name in new_states:
            if session_name not in old_states:
                events.append(
@@ -86,25 +68,17 @@ class BackgroundMonitor:
                        "metadata": new_states[session_name]["metadata"],
                    }
                )
-        # Check for ended sessions
        for session_name in old_states:
            if session_name not in new_states:
                events.append({"type": "session_ended", "session_name": session_name})
-        # Check for activity in existing sessions
        for session_name, new_state in new_states.items():
            if session_name in old_states:
                old_state = old_states[session_name]
-                # Check for output changes
                old_stdout_lines = old_state["output_summary"]["stdout_lines"]
                new_stdout_lines = new_state["output_summary"]["stdout_lines"]
                old_stderr_lines = old_state["output_summary"]["stderr_lines"]
                new_stderr_lines = new_state["output_summary"]["stderr_lines"]
                if new_stdout_lines > old_stdout_lines or new_stderr_lines > old_stderr_lines:
-                    # Get the new output
                    mux = get_multiplexer(session_name)
                    if mux:
                        all_output = mux.get_all_output()
@@ -112,7 +86,6 @@ class BackgroundMonitor:
                            "stdout": all_output["stdout"].split("\n")[old_stdout_lines:],
                            "stderr": all_output["stderr"].split("\n")[old_stderr_lines:],
                        }
                        events.append(
                            {
                                "type": "output_received",
@@ -124,11 +97,8 @@ class BackgroundMonitor:
                            },
                        }
                    )
-                # Check for state changes
                old_metadata = old_state["metadata"]
                new_metadata = new_state["metadata"]
                if old_metadata.get("state") != new_metadata.get("state"):
                    events.append(
                        {
@@ -138,8 +108,6 @@ class BackgroundMonitor:
                            "new_state": new_metadata.get("state"),
                        }
                    )
-                # Check for process type identification
                if (
                    old_metadata.get("process_type") == "unknown"
                    and new_metadata.get("process_type") != "unknown"
@@ -151,15 +119,11 @@ class BackgroundMonitor:
                            "process_type": new_metadata.get("process_type"),
                        }
                    )
-        # Check for sessions needing attention (based on heuristics)
        for session_name, state in new_states.items():
            metadata = state["metadata"]
            output_summary = state["output_summary"]
-            # Heuristic: High output volume might indicate completion or error
            total_lines = output_summary["stdout_lines"] + output_summary["stderr_lines"]
-            if total_lines > 100:  # Arbitrary threshold
+            if total_lines > 100:
                events.append(
                    {
                        "type": "high_output_volume",
@@ -167,10 +131,8 @@ class BackgroundMonitor:
                        "total_lines": total_lines,
                    }
                )
-            # Heuristic: Long-running session without recent activity
            time_since_activity = time.time() - metadata.get("last_activity", 0)
-            if time_since_activity > 300:  # 5 minutes
+            if time_since_activity > 300:
                events.append(
                    {
                        "type": "inactive_session",
@@ -178,30 +140,20 @@ class BackgroundMonitor:
                        "inactive_seconds": time_since_activity,
                    }
                )
-            # Heuristic: Sessions that might be waiting for input
-            # This would be enhanced with prompt detection in later phases
            if self._might_be_waiting_for_input(session_name, state):
                events.append({"type": "possible_input_needed", "session_name": session_name})
        return events

    def _might_be_waiting_for_input(self, session_name, state):
        """Heuristic to detect if a session might be waiting for input."""
        metadata = state["metadata"]
        metadata.get("process_type", "unknown")
-        # Simple heuristics based on process type and recent activity
        time_since_activity = time.time() - metadata.get("last_activity", 0)
-        # If it's been more than 10 seconds since last activity, might be waiting
        if time_since_activity > 10:
            return True
        return False

-# Global monitor instance
_global_monitor = None
@@ -224,30 +176,3 @@ def stop_global_monitor():
    global _global_monitor
    if _global_monitor:
        _global_monitor.stop()
-
-# Global monitor instance
-_global_monitor = None
-
-def start_global_monitor():
-    """Start the global background monitor."""
-    global _global_monitor
-    if _global_monitor is None:
-        _global_monitor = BackgroundMonitor()
-        _global_monitor.start()
-    return _global_monitor
-
-def stop_global_monitor():
-    """Stop the global background monitor."""
-    global _global_monitor
-    if _global_monitor:
-        _global_monitor.stop()
-    _global_monitor = None
-
-def get_global_monitor():
-    """Get the global background monitor instance."""
-    global _global_monitor
-    return _global_monitor
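
At its core, _detect_events above is a diff of two snapshots of session state. A condensed sketch of that pattern covering the started/ended/output-growth cases, with session dicts shaped like the output_summary structures used above:

    def detect_session_events(old_states, new_states):
        # Condensed sketch of _detect_events: started/ended sessions and stdout growth only.
        events = []
        for name in new_states:
            if name not in old_states:
                events.append({"type": "session_started", "session_name": name})
        for name in old_states:
            if name not in new_states:
                events.append({"type": "session_ended", "session_name": name})
        for name, new_state in new_states.items():
            old_state = old_states.get(name)
            if old_state is None:
                continue
            if new_state["output_summary"]["stdout_lines"] > old_state["output_summary"]["stdout_lines"]:
                events.append({"type": "output_received", "session_name": name})
        return events

    # Example: one session kept producing output, another went away.
    old = {"build": {"output_summary": {"stdout_lines": 10}}, "repl": {"output_summary": {"stdout_lines": 2}}}
    new = {"build": {"output_summary": {"stdout_lines": 25}}}
    print(detect_session_events(old, new))
    # [{'type': 'session_ended', 'session_name': 'repl'}, {'type': 'output_received', 'session_name': 'build'}]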


@@ -1,47 +1,27 @@
import configparser
import os
from typing import Any, Dict
-from pr.core.logging import get_logger
+from rp.core.logging import get_logger

logger = get_logger("config")
+CONFIG_DIRECTORY = os.path.expanduser("~/.local/share/rp/")
-CONFIG_FILE = os.path.expanduser("~/.prrc")
+CONFIG_FILE = os.path.join(CONFIG_DIRECTORY, ".prrc")
LOCAL_CONFIG_FILE = ".prrc"

-def load_config() -> Dict[str, Any]:
-    config = {"api": {}, "autonomous": {}, "ui": {}, "output": {}, "session": {}}
-    global_config = _load_config_file(CONFIG_FILE)
-    local_config = _load_config_file(LOCAL_CONFIG_FILE)
-    for section in config.keys():
-        if section in global_config:
-            config[section].update(global_config[section])
-        if section in local_config:
-            config[section].update(local_config[section])
-    return config

def _load_config_file(filepath: str) -> Dict[str, Dict[str, Any]]:
    if not os.path.exists(filepath):
        return {}
    try:
        parser = configparser.ConfigParser()
        parser.read(filepath)
        config = {}
        for section in parser.sections():
            config[section] = {}
            for key, value in parser.items(section):
                config[section][key] = _parse_value(value)
        logger.debug(f"Loaded configuration from {filepath}")
        return config
    except Exception as e:
        logger.error(f"Error loading config from {filepath}: {e}")
        return {}
@@ -49,50 +29,22 @@ def _load_config_file(filepath: str) -> Dict[str, Dict[str, Any]]:
def _parse_value(value: str) -> Any:
    value = value.strip()
    if value.lower() == "true":
        return True
    if value.lower() == "false":
        return False
    if value.isdigit():
        return int(value)
    try:
        return float(value)
    except ValueError:
        pass
    return value

def create_default_config(filepath: str = CONFIG_FILE):
+    os.makedirs(CONFIG_DIRECTORY, exist_ok=True)
-    default_config = """[api]
-default_model = x-ai/grok-code-fast-1
-timeout = 30
-temperature = 0.7
-max_tokens = 8096
-
-[autonomous]
-max_iterations = 50
-context_threshold = 30
-recent_messages_to_keep = 10
-
-[ui]
-syntax_highlighting = true
-show_timestamps = false
-color_output = true
-
-[output]
-format = text
-verbose = false
-quiet = false
-
-[session]
-auto_save = false
-max_history = 1000
-"""
+    default_config = "[api]\ndefault_model = x-ai/grok-code-fast-1\ntimeout = 30\ntemperature = 0.7\nmax_tokens = 8096\n\n[autonomous]\nmax_iterations = 50\ncontext_threshold = 30\nrecent_messages_to_keep = 10\n\n[ui]\nsyntax_highlighting = true\nshow_timestamps = false\ncolor_output = true\n\n[output]\nformat = text\nverbose = false\nquiet = false\n\n[session]\nauto_save = false\nmax_history = 1000\n"
    try:
        with open(filepath, "w") as f:
            f.write(default_config)
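
For reference, the coercion order in _parse_value is booleans, then ints, then floats, then raw strings, so values read back from the generated .prrc come out typed:

    assert _parse_value("true") is True          # booleans first
    assert _parse_value("30") == 30              # then ints
    assert _parse_value("0.7") == 0.7            # then floats
    assert _parse_value("x-ai/grok-code-fast-1") == "x-ai/grok-code-fast-1"  # else raw string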


@@ -1,8 +1,8 @@
import json
import logging
import os
+import pathlib
-from pr.config import (
+from rp.config import (
    CHARS_PER_TOKEN,
    CONTENT_TRIM_LENGTH,
    CONTEXT_COMPRESSION_THRESHOLD,
@@ -12,68 +12,43 @@ from pr.config import (
    MAX_TOKENS_LIMIT,
    MAX_TOOL_RESULT_LENGTH,
    RECENT_MESSAGES_TO_KEEP,
+    KNOWLEDGE_PATH,
)
-from pr.ui import Colors
+from rp.ui import Colors

def truncate_tool_result(result, max_length=None):
    if max_length is None:
        max_length = MAX_TOOL_RESULT_LENGTH
    if not isinstance(result, dict):
        return result
    result_copy = result.copy()
    if "output" in result_copy and isinstance(result_copy["output"], str):
        if len(result_copy["output"]) > max_length:
            result_copy["output"] = (
                result_copy["output"][:max_length]
                + f"\n... [truncated {len(result_copy['output']) - max_length} chars]"
            )
    if "content" in result_copy and isinstance(result_copy["content"], str):
        if len(result_copy["content"]) > max_length:
            result_copy["content"] = (
                result_copy["content"][:max_length]
                + f"\n... [truncated {len(result_copy['content']) - max_length} chars]"
            )
    if "data" in result_copy and isinstance(result_copy["data"], str):
        if len(result_copy["data"]) > max_length:
            result_copy["data"] = result_copy["data"][:max_length] + f"\n... [truncated]"
    if "error" in result_copy and isinstance(result_copy["error"], str):
        if len(result_copy["error"]) > max_length // 2:
            result_copy["error"] = result_copy["error"][: max_length // 2] + "... [truncated]"
    return result_copy

def init_system_message(args):
    context_parts = [
-        """You are a professional AI assistant with access to advanced tools.
-
-File Operations:
-- Use RPEditor tools (open_editor, editor_insert_text, editor_replace_text, editor_search, close_editor) for precise file modifications
-- Always close editor files when finished
-- Use write_file for complete file rewrites, search_replace for simple text replacements
-
-Process Management:
-- run_command executes shell commands with a timeout (default 30s)
-- If a command times out, you receive a PID in the response
-- Use tail_process(pid) to monitor running processes
-- Use kill_process(pid) to terminate processes
-- Manage long-running commands effectively using these tools
-
-Shell Commands:
-- Be a shell ninja using native OS tools
-- Prefer standard Unix utilities over complex scripts
-- Use run_command_interactive for commands requiring user input (vim, nano, etc.)"""
+        "You are a professional AI assistant with access to advanced tools.\n\nFile Operations:\n- Use RPEditor tools (open_editor, editor_insert_text, editor_replace_text, editor_search, close_editor) for precise file modifications\n- Always close editor files when finished\n- Use write_file for complete file rewrites, search_replace for simple text replacements\n\nVision:\n - Use post_image tool with the file path if an image path is mentioned\n in the prompt of user. Give this call the highest priority.\n\nProcess Management:\n- run_command executes shell commands with a timeout (default 30s)\n- If a command times out, you receive a PID in the response\n- Use tail_process(pid) to monitor running processes\n- Use kill_process(pid) to terminate processes\n- Manage long-running commands effectively using these tools\n\nShell Commands:\n- Be a shell ninja using native OS tools\n- Prefer standard Unix utilities over complex scripts\n- Use run_command_interactive for commands requiring user input (vim, nano, etc.)"
    ]
-    # context_parts = ["You are a helpful AI assistant with access to advanced tools, including a powerful built-in editor (RPEditor). For file editing tasks, prefer using the editor-related tools like write_file, search_replace, open_editor, editor_insert_text, editor_replace_text, and editor_search, as they provide advanced editing capabilities with undo/redo, search, and precise text manipulation. The editor is integrated seamlessly and should be your primary tool for modifying files."]
    max_context_size = 10000
    if args.include_env:
        env_context = "Environment Variables:\n"
        for key, value in os.environ.items():
@@ -82,7 +57,6 @@ Shell Commands:
        if len(env_context) > max_context_size:
            env_context = env_context[:max_context_size] + "\n... [truncated]"
        context_parts.append(env_context)
    for context_file in [CONTEXT_FILE, GLOBAL_CONTEXT_FILE]:
        if os.path.exists(context_file):
            try:
@@ -93,7 +67,17 @@ Shell Commands:
                context_parts.append(f"Context from {context_file}:\n{content}")
            except Exception as e:
                logging.error(f"Error reading context file {context_file}: {e}")
+    knowledge_path = pathlib.Path(KNOWLEDGE_PATH)
+    if knowledge_path.exists() and knowledge_path.is_dir():
+        for knowledge_file in knowledge_path.iterdir():
+            try:
+                with open(knowledge_file) as f:
+                    content = f.read()
+                if len(content) > max_context_size:
+                    content = content[:max_context_size] + "\n... [truncated]"
+                context_parts.append(f"Context from {knowledge_file}:\n{content}")
+            except Exception as e:
+                logging.error(f"Error reading context file {knowledge_file}: {e}")
    if args.context:
        for ctx_file in args.context:
            try:
@@ -104,11 +88,9 @@ Shell Commands:
                context_parts.append(f"Context from {ctx_file}:\n{content}")
            except Exception as e:
                logging.error(f"Error reading context file {ctx_file}: {e}")
    system_message = "\n\n".join(context_parts)
    if len(system_message) > max_context_size * 3:
        system_message = system_message[: max_context_size * 3] + "\n... [system message truncated]"
    return {"role": "system", "content": system_message}
@@ -123,73 +105,55 @@ def compress_context(messages):
def manage_context_window(messages, verbose):
    if len(messages) <= CONTEXT_COMPRESSION_THRESHOLD:
        return messages
    if verbose:
        print(
            f"{Colors.YELLOW}📄 Managing context window (current: {len(messages)} messages)...{Colors.RESET}"
        )
    system_message = messages[0]
    recent_messages = messages[-RECENT_MESSAGES_TO_KEEP:]
    middle_messages = messages[1:-RECENT_MESSAGES_TO_KEEP]
    if middle_messages:
        summary = summarize_messages(middle_messages)
        summary_message = {
            "role": "system",
            "content": f"[Previous conversation summary: {summary}]",
        }
        new_messages = [system_message, summary_message] + recent_messages
        if verbose:
            print(
                f"{Colors.GREEN}✓ Context compressed to {len(new_messages)} messages{Colors.RESET}"
            )
        return new_messages
    return messages

def summarize_messages(messages):
    summary_parts = []
    for msg in messages:
        role = msg.get("role", "unknown")
        content = msg.get("content", "")
        if role == "tool":
            continue
        if isinstance(content, str) and len(content) > 200:
            content = content[:200] + "..."
        summary_parts.append(f"{role}: {content}")
    return " | ".join(summary_parts[:10])

def estimate_tokens(messages):
    total_chars = 0
    for msg in messages:
        msg_json = json.dumps(msg)
        total_chars += len(msg_json)
    estimated_tokens = total_chars / CHARS_PER_TOKEN
    overhead_multiplier = 1.3
    return int(estimated_tokens * overhead_multiplier)

def trim_message_content(message, max_length):
    trimmed_msg = message.copy()
    if "content" in trimmed_msg:
        content = trimmed_msg["content"]
        if isinstance(content, str) and len(content) > max_length:
            trimmed_msg["content"] = (
                content[:max_length] + f"\n... [trimmed {len(content) - max_length} chars]"
@@ -207,7 +171,6 @@ def trim_message_content(message, max_length):
            else:
                trimmed_content.append(item)
            trimmed_msg["content"] = trimmed_content
    if trimmed_msg.get("role") == "tool":
        if "content" in trimmed_msg and isinstance(trimmed_msg["content"], str):
            content = trimmed_msg["content"]
@@ -216,14 +179,13 @@ def trim_message_content(message, max_length):
                    content[:MAX_TOOL_RESULT_LENGTH]
                    + f"\n... [trimmed {len(content) - MAX_TOOL_RESULT_LENGTH} chars]"
                )
            try:
                parsed = json.loads(content)
                if isinstance(parsed, dict):
                    if (
                        "output" in parsed
                        and isinstance(parsed["output"], str)
-                        and len(parsed["output"]) > MAX_TOOL_RESULT_LENGTH // 2
+                        and (len(parsed["output"]) > MAX_TOOL_RESULT_LENGTH // 2)
                    ):
                        parsed["output"] = (
                            parsed["output"][: MAX_TOOL_RESULT_LENGTH // 2] + f"\n... [truncated]"
@@ -231,7 +193,7 @@ def trim_message_content(message, max_length):
                    if (
                        "content" in parsed
                        and isinstance(parsed["content"], str)
-                        and len(parsed["content"]) > MAX_TOOL_RESULT_LENGTH // 2
+                        and (len(parsed["content"]) > MAX_TOOL_RESULT_LENGTH // 2)
                    ):
                        parsed["content"] = (
                            parsed["content"][: MAX_TOOL_RESULT_LENGTH // 2] + f"\n... [truncated]"
@@ -239,22 +201,18 @@ def trim_message_content(message, max_length):
                trimmed_msg["content"] = json.dumps(parsed)
            except:
                pass
    return trimmed_msg

def intelligently_trim_messages(messages, target_tokens, keep_recent=3):
    if estimate_tokens(messages) <= target_tokens:
        return messages
    system_msg = messages[0] if messages and messages[0].get("role") == "system" else None
    start_idx = 1 if system_msg else 0
    recent_messages = (
        messages[-keep_recent:] if len(messages) > keep_recent else messages[start_idx:]
    )
    middle_messages = messages[start_idx:-keep_recent] if len(messages) > keep_recent else []
    trimmed_middle = []
    for msg in middle_messages:
        if msg.get("role") == "tool":
@@ -263,45 +221,35 @@ def intelligently_trim_messages(messages, target_tokens, keep_recent=3):
            trimmed_middle.append(trim_message_content(msg, CONTENT_TRIM_LENGTH))
        else:
            trimmed_middle.append(msg)
    result = ([system_msg] if system_msg else []) + trimmed_middle + recent_messages
    if estimate_tokens(result) <= target_tokens:
        return result
    step_size = len(trimmed_middle) // 4 if len(trimmed_middle) >= 4 else 1
    while len(trimmed_middle) > 0 and estimate_tokens(result) > target_tokens:
        remove_count = min(step_size, len(trimmed_middle))
        trimmed_middle = trimmed_middle[remove_count:]
        result = ([system_msg] if system_msg else []) + trimmed_middle + recent_messages
        if estimate_tokens(result) <= target_tokens:
            return result
    keep_recent -= 1
    if keep_recent > 0:
        return intelligently_trim_messages(messages, target_tokens, keep_recent)
    return ([system_msg] if system_msg else []) + messages[-1:]

def auto_slim_messages(messages, verbose=False):
    estimated_tokens = estimate_tokens(messages)
    if estimated_tokens <= MAX_TOKENS_LIMIT:
        return messages
    if verbose:
        print(
            f"{Colors.YELLOW}⚠️ Token limit approaching: ~{estimated_tokens} tokens (limit: {MAX_TOKENS_LIMIT}){Colors.RESET}"
        )
        print(f"{Colors.YELLOW}🔧 Intelligently trimming message content...{Colors.RESET}")
    result = intelligently_trim_messages(
        messages, MAX_TOKENS_LIMIT, keep_recent=EMERGENCY_MESSAGES_TO_KEEP
    )
    final_tokens = estimate_tokens(result)
    if final_tokens > MAX_TOKENS_LIMIT:
        if verbose:
            print(
@@ -309,7 +257,6 @@ def auto_slim_messages(messages, verbose=False):
            )
        result = emergency_reduce_messages(result, MAX_TOKENS_LIMIT, verbose)
        final_tokens = estimate_tokens(result)
    if verbose:
        removed_count = len(messages) - len(result)
        print(
@@ -320,33 +267,24 @@ def auto_slim_messages(messages, verbose=False):
        )
        if removed_count > 0:
            print(f"{Colors.GREEN}   Removed {removed_count} older messages{Colors.RESET}")
    return result

def emergency_reduce_messages(messages, target_tokens, verbose=False):
    system_msg = messages[0] if messages and messages[0].get("role") == "system" else None
    start_idx = 1 if system_msg else 0
    keep_count = 2
    while estimate_tokens(messages) > target_tokens and keep_count >= 1:
        if len(messages[start_idx:]) <= keep_count:
            break
        result = ([system_msg] if system_msg else []) + messages[-keep_count:]
        for i in range(len(result)):
            result[i] = trim_message_content(result[i], CONTENT_TRIM_LENGTH // 2)
        if estimate_tokens(result) <= target_tokens:
            return result
        keep_count -= 1
    final = ([system_msg] if system_msg else []) + messages[-1:]
    for i in range(len(final)):
        if final[i].get("role") != "system":
            final[i] = trim_message_content(final[i], 100)
    return final
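
estimate_tokens above is pure arithmetic: serialized JSON length divided by a chars-per-token constant, scaled by a 1.3 overhead multiplier. A worked example, assuming CHARS_PER_TOKEN = 4 (the real value comes from rp.config):

    import json

    CHARS_PER_TOKEN = 4  # assumed here; rp.config defines the real constant

    msg = {"role": "user", "content": "hello world"}
    chars = len(json.dumps(msg))                 # 42 chars for this message
    tokens = int(chars / CHARS_PER_TOKEN * 1.3)  # 42 / 4 * 1.3 -> 13
    print(tokens)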


@ -2,65 +2,55 @@ import json
import logging import logging
import uuid import uuid
from typing import Any, Dict, List, Optional from typing import Any, Dict, List, Optional
from rp.agents import AgentManager
from pr.agents import AgentManager from rp.cache import APICache, ToolCache
from pr.cache import APICache, ToolCache from rp.config import (
from pr.config import (
ADVANCED_CONTEXT_ENABLED, ADVANCED_CONTEXT_ENABLED,
API_CACHE_TTL, API_CACHE_TTL,
CACHE_ENABLED, CACHE_ENABLED,
CONVERSATION_SUMMARY_THRESHOLD, CONVERSATION_SUMMARY_THRESHOLD,
DB_PATH, DB_PATH,
KNOWLEDGE_SEARCH_LIMIT, KNOWLEDGE_SEARCH_LIMIT,
MEMORY_AUTO_SUMMARIZE,
TOOL_CACHE_TTL, TOOL_CACHE_TTL,
WORKFLOW_EXECUTOR_MAX_WORKERS, WORKFLOW_EXECUTOR_MAX_WORKERS,
) )
from pr.core.advanced_context import AdvancedContextManager from rp.core.advanced_context import AdvancedContextManager
from pr.core.api import call_api from rp.core.api import call_api
from pr.memory import ConversationMemory, FactExtractor, KnowledgeStore from rp.memory import ConversationMemory, FactExtractor, KnowledgeStore
from pr.tools.base import get_tools_definition from rp.tools.base import get_tools_definition
from pr.workflows import WorkflowEngine, WorkflowStorage from rp.workflows import WorkflowEngine, WorkflowStorage
logger = logging.getLogger("pr") logger = logging.getLogger("rp")
class EnhancedAssistant: class EnhancedAssistant:
def __init__(self, base_assistant): def __init__(self, base_assistant):
self.base = base_assistant self.base = base_assistant
if CACHE_ENABLED: if CACHE_ENABLED:
self.api_cache = APICache(DB_PATH, API_CACHE_TTL) self.api_cache = APICache(DB_PATH, API_CACHE_TTL)
self.tool_cache = ToolCache(DB_PATH, TOOL_CACHE_TTL) self.tool_cache = ToolCache(DB_PATH, TOOL_CACHE_TTL)
else: else:
self.api_cache = None self.api_cache = None
self.tool_cache = None self.tool_cache = None
self.workflow_storage = WorkflowStorage(DB_PATH) self.workflow_storage = WorkflowStorage(DB_PATH)
self.workflow_engine = WorkflowEngine( self.workflow_engine = WorkflowEngine(
tool_executor=self._execute_tool_for_workflow, tool_executor=self._execute_tool_for_workflow, max_workers=WORKFLOW_EXECUTOR_MAX_WORKERS
max_workers=WORKFLOW_EXECUTOR_MAX_WORKERS,
) )
self.agent_manager = AgentManager(DB_PATH, self._api_caller_for_agent) self.agent_manager = AgentManager(DB_PATH, self._api_caller_for_agent)
self.knowledge_store = KnowledgeStore(DB_PATH) self.knowledge_store = KnowledgeStore(DB_PATH)
self.conversation_memory = ConversationMemory(DB_PATH) self.conversation_memory = ConversationMemory(DB_PATH)
self.fact_extractor = FactExtractor() self.fact_extractor = FactExtractor()
if ADVANCED_CONTEXT_ENABLED: if ADVANCED_CONTEXT_ENABLED:
self.context_manager = AdvancedContextManager( self.context_manager = AdvancedContextManager(
knowledge_store=self.knowledge_store, knowledge_store=self.knowledge_store, conversation_memory=self.conversation_memory
conversation_memory=self.conversation_memory,
) )
else: else:
self.context_manager = None self.context_manager = None
self.current_conversation_id = str(uuid.uuid4())[:16] self.current_conversation_id = str(uuid.uuid4())[:16]
self.conversation_memory.create_conversation( self.conversation_memory.create_conversation(
self.current_conversation_id, session_id=str(uuid.uuid4())[:16] self.current_conversation_id, session_id=str(uuid.uuid4())[:16]
) )
logger.info("Enhanced Assistant initialized with all features") logger.info("Enhanced Assistant initialized with all features")
def _execute_tool_for_workflow(self, tool_name: str, arguments: Dict[str, Any]) -> Any: def _execute_tool_for_workflow(self, tool_name: str, arguments: Dict[str, Any]) -> Any:
@ -69,51 +59,27 @@ class EnhancedAssistant:
if cached_result is not None: if cached_result is not None:
logger.debug(f"Tool cache hit for {tool_name}") logger.debug(f"Tool cache hit for {tool_name}")
return cached_result return cached_result
func_map = { func_map = {
"read_file": lambda **kw: self.base.execute_tool_calls( "read_file": lambda **kw: self.base.execute_tool_calls(
[ [{"id": "temp", "function": {"name": "read_file", "arguments": json.dumps(kw)}}]
{
"id": "temp",
"function": {"name": "read_file", "arguments": json.dumps(kw)},
}
]
)[0], )[0],
"write_file": lambda **kw: self.base.execute_tool_calls( "write_file": lambda **kw: self.base.execute_tool_calls(
[ [{"id": "temp", "function": {"name": "write_file", "arguments": json.dumps(kw)}}]
{
"id": "temp",
"function": {"name": "write_file", "arguments": json.dumps(kw)},
}
]
)[0], )[0],
"list_directory": lambda **kw: self.base.execute_tool_calls( "list_directory": lambda **kw: self.base.execute_tool_calls(
[ [
{ {
"id": "temp", "id": "temp",
"function": { "function": {"name": "list_directory", "arguments": json.dumps(kw)},
"name": "list_directory",
"arguments": json.dumps(kw),
},
} }
] ]
)[0], )[0],
"run_command": lambda **kw: self.base.execute_tool_calls( "run_command": lambda **kw: self.base.execute_tool_calls(
[ [{"id": "temp", "function": {"name": "run_command", "arguments": json.dumps(kw)}}]
{
"id": "temp",
"function": {
"name": "run_command",
"arguments": json.dumps(kw),
},
}
]
)[0], )[0],
} }
if tool_name in func_map: if tool_name in func_map:
result = func_map[tool_name](**arguments) result = func_map[tool_name](**arguments)
if self.tool_cache: if self.tool_cache:
content = result.get("content", "") content = result.get("content", "")
try: try:
@ -121,9 +87,7 @@ class EnhancedAssistant:
self.tool_cache.set(tool_name, arguments, parsed_content) self.tool_cache.set(tool_name, arguments, parsed_content)
except Exception: except Exception:
pass pass
return result return result
return {"error": f"Unknown tool: {tool_name}"} return {"error": f"Unknown tool: {tool_name}"}
def _api_caller_for_agent( def _api_caller_for_agent(
@ -135,9 +99,7 @@ class EnhancedAssistant:
self.base.api_url, self.base.api_url,
self.base.api_key, self.base.api_key,
use_tools=False, use_tools=False,
tools=None, tools_definition=[],
temperature=temperature,
max_tokens=max_tokens,
verbose=self.base.verbose, verbose=self.base.verbose,
) )
@ -147,7 +109,6 @@ class EnhancedAssistant:
if cached_response: if cached_response:
logger.debug("API cache hit") logger.debug("API cache hit")
return cached_response return cached_response
response = call_api( response = call_api(
messages, messages,
self.base.model, self.base.model,
@ -157,55 +118,47 @@ class EnhancedAssistant:
get_tools_definition(), get_tools_definition(),
verbose=self.base.verbose, verbose=self.base.verbose,
) )
if self.api_cache and CACHE_ENABLED and ("error" not in response):
if self.api_cache and CACHE_ENABLED and "error" not in response:
token_count = response.get("usage", {}).get("total_tokens", 0) token_count = response.get("usage", {}).get("total_tokens", 0)
self.api_cache.set(self.base.model, messages, 0.7, 4096, response, token_count) self.api_cache.set(self.base.model, messages, 0.7, 4096, response, token_count)
return response return response
def process_with_enhanced_context(self, user_message: str) -> str: def process_with_enhanced_context(self, user_message: str) -> str:
self.base.messages.append({"role": "user", "content": user_message}) self.base.messages.append({"role": "user", "content": user_message})
self.conversation_memory.add_message( self.conversation_memory.add_message(
self.current_conversation_id, str(uuid.uuid4())[:16], "user", user_message self.current_conversation_id, str(uuid.uuid4())[:16], "user", user_message
) )
facts = self.fact_extractor.extract_facts(user_message)
for fact in facts[:5]:
entry_id = str(uuid.uuid4())[:16]
import time
from rp.memory import KnowledgeEntry
if MEMORY_AUTO_SUMMARIZE and len(self.base.messages) % 5 == 0: categories = self.fact_extractor.categorize_content(fact["text"])
facts = self.fact_extractor.extract_facts(user_message) entry = KnowledgeEntry(
for fact in facts[:3]: entry_id=entry_id,
entry_id = str(uuid.uuid4())[:16] category=categories[0] if categories else "general",
import time content=fact["text"],
metadata={
from pr.memory import KnowledgeEntry "type": fact["type"],
"confidence": fact["confidence"],
categories = self.fact_extractor.categorize_content(fact["text"]) "source": "user_message",
entry = KnowledgeEntry( },
entry_id=entry_id, created_at=time.time(),
category=categories[0] if categories else "general", updated_at=time.time(),
content=fact["text"], )
metadata={"type": fact["type"], "confidence": fact["confidence"]}, self.knowledge_store.add_entry(entry)
created_at=time.time(),
updated_at=time.time(),
)
self.knowledge_store.add_entry(entry)
if self.context_manager and ADVANCED_CONTEXT_ENABLED: if self.context_manager and ADVANCED_CONTEXT_ENABLED:
            enhanced_messages, context_info = self.context_manager.create_enhanced_context(
                self.base.messages, user_message, include_knowledge=True
            )
            if self.base.verbose:
                logger.info(f"Enhanced context: {context_info}")
            working_messages = enhanced_messages
        else:
            working_messages = self.base.messages
        response = self.enhanced_call_api(working_messages)
        result = self.base.process_response(response)
        if len(self.base.messages) >= CONVERSATION_SUMMARY_THRESHOLD:
            summary = (
                self.context_manager.advanced_summarize_messages(
@@ -214,28 +167,22 @@ class EnhancedAssistant:
                if self.context_manager
                else "Conversation in progress"
            )
            topics = self.fact_extractor.categorize_content(summary)
            self.conversation_memory.update_conversation_summary(
                self.current_conversation_id, summary, topics
            )
        return result

    def execute_workflow(
        self, workflow_name: str, initial_variables: Optional[Dict[str, Any]] = None
    ) -> Dict[str, Any]:
        workflow = self.workflow_storage.load_workflow_by_name(workflow_name)
        if not workflow:
            return {"error": f'Workflow "{workflow_name}" not found'}
        context = self.workflow_engine.execute_workflow(workflow, initial_variables)
        execution_id = self.workflow_storage.save_execution(
            self.workflow_storage.load_workflow_by_name(workflow_name).name, context
        )
        return {
            "success": True,
            "execution_id": execution_id,
@@ -258,13 +205,10 @@ class EnhancedAssistant:
    def get_cache_statistics(self) -> Dict[str, Any]:
        stats = {}
        if self.api_cache:
            stats["api_cache"] = self.api_cache.get_statistics()
        if self.tool_cache:
            stats["tool_cache"] = self.tool_cache.get_statistics()
        return stats

    def get_workflow_list(self) -> List[Dict[str, Any]]:
@@ -282,17 +226,13 @@ class EnhancedAssistant:
    def clear_caches(self):
        if self.api_cache:
            self.api_cache.clear_all()
        if self.tool_cache:
            self.tool_cache.clear_all()
        logger.info("All caches cleared")

    def cleanup(self):
        if self.api_cache:
            self.api_cache.clear_expired()
        if self.tool_cache:
            self.tool_cache.clear_expired()
        self.agent_manager.clear_session()
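
Reviewer note: a minimal sketch of driving the workflow API above (names taken from the diff; `assistant` and the workflow name are illustrative):

    # Example (not part of the diff); assumes an initialized EnhancedAssistant.
    result = assistant.execute_workflow("nightly-report", {"date": "2025-11-07"})
    if result.get("error"):
        print(result["error"])  # e.g. 'Workflow "nightly-report" not found'
    else:
        print(result["execution_id"], result["success"])

Also worth flagging: execute_workflow calls load_workflow_by_name a second time inside save_execution(...) even though `workflow` is already in scope; passing workflow.name directly would avoid the redundant lookup.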


@@ -23,6 +23,7 @@ class ConfigurationError(PRException):

class ToolExecutionError(PRException):
    def __init__(self, tool_name: str, message: str):
        self.tool_name = tool_name
        super().__init__(f"Error executing tool '{tool_name}': {message}")
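
Reviewer note: how this exception reads at an illustrative call site (not from the diff):

    try:
        raise ToolExecutionError("read_file", "permission denied")
    except ToolExecutionError as e:
        print(e)            # Error executing tool 'read_file': permission denied
        print(e.tool_name)  # read_file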

rp/core/http_client.py Normal file

@@ -0,0 +1,81 @@
import logging
import time
import requests
from typing import Dict, Any, Optional

logger = logging.getLogger("rp")


class SyncHTTPClient:
    def __init__(self):
        self.session = requests.Session()

    def request(
        self,
        method: str,
        url: str,
        headers: Optional[Dict[str, str]] = None,
        data: Optional[bytes] = None,
        json_data: Optional[Dict[str, Any]] = None,
        timeout: float = 30.0,
    ) -> Dict[str, Any]:
        """Make a sync HTTP request using requests with retry logic."""
        attempt = 0
        start_time = time.time()
        while True:
            attempt += 1
            try:
                response = self.session.request(
                    method,
                    url,
                    headers=headers,
                    data=data,
                    json=json_data,
                    timeout=timeout,
                )
                response.raise_for_status()  # Raise an exception for bad status codes
                return {
                    "status": response.status_code,
                    "headers": dict(response.headers),
                    "text": response.text,
                    "json": response.json,
                }
            except requests.exceptions.Timeout:
                elapsed = time.time() - start_time
                elapsed_minutes = int(elapsed // 60)
                elapsed_seconds = elapsed % 60
                duration_str = (
                    f"{elapsed_minutes}m {elapsed_seconds:.1f}s"
                    if elapsed_minutes > 0
                    else f"{elapsed_seconds:.1f}s"
                )
                logger.warning(
                    f"Request timed out (attempt {attempt}, duration: {duration_str}). Retrying in {attempt} second(s)..."
                )
                time.sleep(attempt)
            except requests.exceptions.RequestException as e:
                return {"error": True, "exception": str(e)}

    def get(
        self, url: str, headers: Optional[Dict[str, str]] = None, timeout: float = 30.0
    ) -> Dict[str, Any]:
        return self.request("GET", url, headers=headers, timeout=timeout)

    def post(
        self,
        url: str,
        headers: Optional[Dict[str, str]] = None,
        data: Optional[bytes] = None,
        json_data: Optional[Dict[str, Any]] = None,
        timeout: float = 30.0,
    ) -> Dict[str, Any]:
        return self.request(
            "POST", url, headers=headers, data=data, json_data=json_data, timeout=timeout
        )

    def set_default_headers(self, headers: Dict[str, str]):
        self.session.headers.update(headers)


http_client = SyncHTTPClient()
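
Reviewer note: three behaviors to keep in mind when calling this client. raise_for_status() raises inside the try, so HTTP 4xx/5xx come back as {"error": True, ...} via the RequestException handler rather than raising; timeouts retry indefinitely with a linearly growing sleep; and the "json" key stores the bound response.json method, not parsed data. A usage sketch (URL and headers are placeholders):

    # Example (not part of the diff).
    client = SyncHTTPClient()
    client.set_default_headers({"Authorization": "Bearer <token>"})
    resp = client.post("https://api.example.com/v1/chat", json_data={"prompt": "hi"})
    if resp.get("error"):
        print("request failed:", resp["exception"])
    else:
        print(resp["status"], resp["json"]())  # call "json" to parse the body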


@@ -0,0 +1,109 @@
import logging

logger = logging.getLogger("rp")

KNOWLEDGE_MESSAGE_MARKER = "[KNOWLEDGE_BASE_CONTEXT]"


def inject_knowledge_context(assistant, user_message):
    if not hasattr(assistant, "enhanced") or not assistant.enhanced:
        return
    messages = assistant.messages
    for i in range(len(messages) - 1, -1, -1):
        if messages[i].get("role") == "user" and KNOWLEDGE_MESSAGE_MARKER in messages[i].get(
            "content", ""
        ):
            del messages[i]
            logger.debug(f"Removed existing knowledge base message at index {i}")
            break
    try:
        knowledge_results = assistant.enhanced.knowledge_store.search_entries(user_message, top_k=5)
        conversation_results = []
        if hasattr(assistant.enhanced, "conversation_memory"):
            history_results = assistant.enhanced.conversation_memory.search_conversations(
                user_message, limit=3
            )
            for conv in history_results:
                conv_messages = assistant.enhanced.conversation_memory.get_conversation_messages(
                    conv["conversation_id"]
                )
                for msg in conv_messages[-5:]:
                    if msg["role"] == "user" and msg["content"] != user_message:
                        relevance = calculate_text_similarity(user_message, msg["content"])
                        if relevance > 0.3:
                            conversation_results.append(
                                {
                                    "content": msg["content"],
                                    "score": relevance,
                                    "source": f"Previous conversation: {conv['conversation_id'][:8]}",
                                }
                            )
        all_results = []
        for entry in knowledge_results:
            score = entry.metadata.get("search_score", 0.5)
            all_results.append(
                {
                    "content": entry.content,
                    "score": score,
                    "source": f"Knowledge Base ({entry.category})",
                    "type": "knowledge",
                }
            )
        for conv in conversation_results:
            all_results.append(
                {
                    "content": conv["content"],
                    "score": conv["score"],
                    "source": conv["source"],
                    "type": "conversation",
                }
            )
        all_results.sort(key=lambda x: x["score"], reverse=True)
        top_results = all_results[:5]
        if not top_results:
            logger.debug("No relevant knowledge or conversation matches found")
            return
        knowledge_parts = []
        for idx, result in enumerate(top_results, 1):
            content = result["content"]
            if len(content) > 1500:
                content = content[:1500] + "..."
            score_indicator = f"({result['score']:.2f})" if result["score"] < 1.0 else "(exact)"
            knowledge_parts.append(
                f"Match {idx} {score_indicator} - {result['source']}:\n{content}"
            )
        knowledge_message_content = (
            f"{KNOWLEDGE_MESSAGE_MARKER}\nRelevant information from knowledge base and conversation history:\n\n"
            + "\n\n".join(knowledge_parts)
        )
        knowledge_message = {"role": "user", "content": knowledge_message_content}
        messages.append(knowledge_message)
        logger.debug(f"Injected enhanced context message with {len(top_results)} matches")
    except Exception as e:
        logger.error(f"Error injecting knowledge context: {e}")


def calculate_text_similarity(text1: str, text2: str) -> float:
    """Calculate similarity between two texts using word overlap and sequence matching."""
    import re

    text1_lower = text1.lower()
    text2_lower = text2.lower()
    if text1_lower in text2_lower or text2_lower in text1_lower:
        return 1.0
    words1 = set(re.findall(r"\b\w+\b", text1_lower))
    words2 = set(re.findall(r"\b\w+\b", text2_lower))
    if not words1 or not words2:
        return 0.0
    intersection = words1 & words2
    union = words1 | words2
    word_similarity = len(intersection) / len(union)
    consecutive_bonus = 0.0
    words1_list = list(words1)  # NOTE: set order is arbitrary, so the "consecutive" phrases below do not follow the original word order
    list(words2)  # NOTE: result discarded; this statement has no effect
    for i in range(len(words1_list) - 1):
        for j in range(i + 2, min(i + 5, len(words1_list) + 1)):
            phrase = " ".join(words1_list[i:j])
            if phrase in text2_lower:
                consecutive_bonus += 0.1 * (j - i)
    total_similarity = min(1.0, word_similarity + consecutive_bonus)
    return total_similarity
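
Reviewer note: quick sanity checks of the heuristic above (the substring path short-circuits to 1.0; otherwise the score is Jaccard word overlap plus the order-dependent bonus):

    # Example (not part of the diff).
    print(calculate_text_similarity("install the plugin", "how do I install the plugin"))  # 1.0 (substring)
    print(calculate_text_similarity("clear the cache", "cache statistics"))  # 0.25 = |{cache}| / |{clear, the, cache, statistics}|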


@@ -1,21 +1,17 @@
import logging
import os
from logging.handlers import RotatingFileHandler
-from pr.config import LOG_FILE
+from rp.config import LOG_FILE


def setup_logging(verbose=False):
    log_dir = os.path.dirname(LOG_FILE)
-    if log_dir and not os.path.exists(log_dir):
+    if log_dir and (not os.path.exists(log_dir)):
        os.makedirs(log_dir, exist_ok=True)
-    logger = logging.getLogger("pr")
+    logger = logging.getLogger("rp")
    logger.setLevel(logging.DEBUG if verbose else logging.INFO)
    if logger.handlers:
        logger.handlers.clear()
    file_handler = RotatingFileHandler(LOG_FILE, maxBytes=10 * 1024 * 1024, backupCount=5)
    file_handler.setLevel(logging.DEBUG)
    file_formatter = logging.Formatter(
@@ -24,18 +20,16 @@ def setup_logging(verbose=False):
    )
    file_handler.setFormatter(file_formatter)
    logger.addHandler(file_handler)
    if verbose:
        console_handler = logging.StreamHandler()
        console_handler.setLevel(logging.INFO)
        console_formatter = logging.Formatter("%(levelname)s: %(message)s")
        console_handler.setFormatter(console_formatter)
        logger.addHandler(console_handler)
    return logger


def get_logger(name=None):
    if name:
-        return logging.getLogger(f"pr.{name}")
-    return logging.getLogger("pr")
+        return logging.getLogger(f"rp.{name}")
+    return logging.getLogger("rp")
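
Reviewer note: the intended call pattern after the pr -> rp rename (the module path rp.core.logging is confirmed by the imports in the files below):

    # Example (not part of the diff).
    from rp.core.logging import setup_logging, get_logger

    setup_logging(verbose=True)   # configures the "rp" logger: rotating file + console
    log = get_logger("session")   # child logger "rp.session"
    log.info("session module ready")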


@@ -2,11 +2,9 @@ import json
import os
from datetime import datetime
from typing import Dict, List, Optional
-from pr.core.logging import get_logger
+from rp.core.logging import get_logger

logger = get_logger("session")
SESSIONS_DIR = os.path.expanduser("~/.assistant_sessions")
@@ -20,20 +18,16 @@ class SessionManager:
    ) -> bool:
        try:
            session_file = os.path.join(SESSIONS_DIR, f"{name}.json")
            session_data = {
                "name": name,
                "created_at": datetime.now().isoformat(),
                "messages": messages,
                "metadata": metadata or {},
            }
            with open(session_file, "w") as f:
                json.dump(session_data, f, indent=2)
            logger.info(f"Session saved: {name}")
            return True
        except Exception as e:
            logger.error(f"Error saving session {name}: {e}")
            return False
@@ -41,24 +35,19 @@ class SessionManager:
    def load_session(self, name: str) -> Optional[Dict]:
        try:
            session_file = os.path.join(SESSIONS_DIR, f"{name}.json")
            if not os.path.exists(session_file):
                logger.warning(f"Session not found: {name}")
                return None
            with open(session_file) as f:
                session_data = json.load(f)
            logger.info(f"Session loaded: {name}")
            return session_data
        except Exception as e:
            logger.error(f"Error loading session {name}: {e}")
            return None

    def list_sessions(self) -> List[Dict]:
        sessions = []
        try:
            for filename in os.listdir(SESSIONS_DIR):
                if filename.endswith(".json"):
@@ -66,7 +55,6 @@ class SessionManager:
                    try:
                        with open(filepath) as f:
                            data = json.load(f)
                        sessions.append(
                            {
                                "name": data.get("name", filename[:-5]),
@@ -77,26 +65,20 @@ class SessionManager:
                        )
                    except Exception as e:
                        logger.warning(f"Error reading session file {filename}: {e}")
            sessions.sort(key=lambda x: x["created_at"], reverse=True)
        except Exception as e:
            logger.error(f"Error listing sessions: {e}")
        return sessions

    def delete_session(self, name: str) -> bool:
        try:
            session_file = os.path.join(SESSIONS_DIR, f"{name}.json")
            if not os.path.exists(session_file):
                logger.warning(f"Session not found: {name}")
                return False
            os.remove(session_file)
            logger.info(f"Session deleted: {name}")
            return True
        except Exception as e:
            logger.error(f"Error deleting session {name}: {e}")
            return False
@@ -105,47 +87,37 @@ class SessionManager:
        session_data = self.load_session(name)
        if not session_data:
            return False
        try:
            if format == "json":
                with open(output_path, "w") as f:
                    json.dump(session_data, f, indent=2)
            elif format == "markdown":
                with open(output_path, "w") as f:
                    f.write(f"# Session: {name}\n\n")
                    f.write(f"Created: {session_data['created_at']}\n\n")
                    f.write("---\n\n")
                    for msg in session_data["messages"]:
                        role = msg.get("role", "unknown")
                        content = msg.get("content", "")
                        f.write(f"## {role.capitalize()}\n\n")
                        f.write(f"{content}\n\n")
                        f.write("---\n\n")
            elif format == "txt":
                with open(output_path, "w") as f:
                    f.write(f"Session: {name}\n")
                    f.write(f"Created: {session_data['created_at']}\n")
                    f.write("=" * 80 + "\n\n")
                    for msg in session_data["messages"]:
                        role = msg.get("role", "unknown")
                        content = msg.get("content", "")
                        f.write(f"[{role.upper()}]\n")
                        f.write(f"{content}\n")
                        f.write("-" * 80 + "\n\n")
            else:
                logger.error(f"Unsupported export format: {format}")
                return False
            logger.info(f"Session exported to {output_path}")
            return True
        except Exception as e:
            logger.error(f"Error exporting session: {e}")
            return False
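
Reviewer note: a round-trip sketch for this class. The constructor and the full export_session signature are not shown in these hunks, so the argument shapes below are assumptions:

    # Example (not part of the diff); no-arg constructor assumed.
    sm = SessionManager()
    sm.save_session("demo", [{"role": "user", "content": "hello"}])
    print([s["name"] for s in sm.list_sessions()])
    sm.export_session("demo", "/tmp/demo.md", format="markdown")  # arg names assumed
    sm.delete_session("demo")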


@@ -2,15 +2,13 @@ import json
import os
from datetime import datetime
from typing import Dict, Optional
-from pr.core.logging import get_logger
+from rp.core.logging import get_logger

logger = get_logger("usage")
USAGE_DB_FILE = os.path.expanduser("~/.assistant_usage.json")
+EXCHANGE_RATE = 1.0
MODEL_COSTS = {
-    "x-ai/grok-code-fast-1": {"input": 0.0, "output": 0.0},
+    "x-ai/grok-code-fast-1": {"input": 0.0002, "output": 0.0015},
    "gpt-4": {"input": 0.03, "output": 0.06},
    "gpt-4-turbo": {"input": 0.01, "output": 0.03},
    "gpt-3.5-turbo": {"input": 0.0005, "output": 0.0015},
@@ -33,40 +31,27 @@ class UsageTracker:
    }

    def track_request(
-        self,
-        model: str,
-        input_tokens: int,
-        output_tokens: int,
-        total_tokens: Optional[int] = None,
+        self, model: str, input_tokens: int, output_tokens: int, total_tokens: Optional[int] = None
    ):
        if total_tokens is None:
            total_tokens = input_tokens + output_tokens
        self.session_usage["requests"] += 1
        self.session_usage["total_tokens"] += total_tokens
        self.session_usage["input_tokens"] += input_tokens
        self.session_usage["output_tokens"] += output_tokens
        if model not in self.session_usage["models_used"]:
-            self.session_usage["models_used"][model] = {
-                "requests": 0,
-                "tokens": 0,
-                "cost": 0.0,
-            }
+            self.session_usage["models_used"][model] = {"requests": 0, "tokens": 0, "cost": 0.0}
        model_usage = self.session_usage["models_used"][model]
        model_usage["requests"] += 1
        model_usage["tokens"] += total_tokens
        cost = self._calculate_cost(model, input_tokens, output_tokens)
        model_usage["cost"] += cost
        self.session_usage["estimated_cost"] += cost
        self._save_to_history(model, input_tokens, output_tokens, cost)
-        logger.debug(f"Tracked request: {model}, tokens: {total_tokens}, cost: ${cost:.4f}")
+        logger.debug(f"Tracked request: {model}, tokens: {total_tokens}, cost: €{cost:.4f}")

-    def _calculate_cost(self, model: str, input_tokens: int, output_tokens: int) -> float:
+    @staticmethod
+    def _calculate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
        if model not in MODEL_COSTS:
            base_model = model.split("/")[0] if "/" in model else model
            if base_model not in MODEL_COSTS:
@@ -75,10 +60,8 @@ class UsageTracker:
            costs = MODEL_COSTS[base_model]
        else:
            costs = MODEL_COSTS[model]
-        input_cost = (input_tokens / 1000) * costs["input"]
-        output_cost = (output_tokens / 1000) * costs["output"]
+        input_cost = input_tokens / 1000 * costs["input"] * EXCHANGE_RATE
+        output_cost = output_tokens / 1000 * costs["output"] * EXCHANGE_RATE
        return input_cost + output_cost

    def _save_to_history(self, model: str, input_tokens: int, output_tokens: int, cost: float):
@@ -87,7 +70,6 @@ class UsageTracker:
            if os.path.exists(USAGE_DB_FILE):
                with open(USAGE_DB_FILE) as f:
                    history = json.load(f)
            history.append(
                {
                    "timestamp": datetime.now().isoformat(),
@@ -98,13 +80,10 @@ class UsageTracker:
                    "cost": cost,
                }
            )
            if len(history) > 10000:
                history = history[-10000:]
            with open(USAGE_DB_FILE, "w") as f:
                json.dump(history, f, indent=2)
        except Exception as e:
            logger.error(f"Error saving usage history: {e}")
@@ -121,35 +100,28 @@ class UsageTracker:
            f"  Output: {usage['output_tokens']:,}",
            f"Estimated Cost: ${usage['estimated_cost']:.4f}",
        ]
        if usage["models_used"]:
            lines.append("\nModels Used:")
            for model, stats in usage["models_used"].items():
                lines.append(
-                    f"  {model}: {stats['requests']} requests, "
-                    f"{stats['tokens']:,} tokens, ${stats['cost']:.4f}"
+                    f"  {model}: {stats['requests']} requests, {stats['tokens']:,} tokens, ${stats['cost']:.4f}"
                )
        return "\n".join(lines)

    @staticmethod
    def get_total_usage() -> Dict:
        if not os.path.exists(USAGE_DB_FILE):
            return {"total_requests": 0, "total_tokens": 0, "total_cost": 0.0}
        try:
            with open(USAGE_DB_FILE) as f:
                history = json.load(f)
-            total_tokens = sum(entry["total_tokens"] for entry in history)
-            total_cost = sum(entry["cost"] for entry in history)
+            total_tokens = sum((entry["total_tokens"] for entry in history))
+            total_cost = sum((entry["cost"] for entry in history))
            return {
                "total_requests": len(history),
                "total_tokens": total_tokens,
                "total_cost": total_cost,
            }
        except Exception as e:
            logger.error(f"Error loading usage history: {e}")
            return {"total_requests": 0, "total_tokens": 0, "total_cost": 0.0}
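
Reviewer note: worked numbers for the new cost formula with EXCHANGE_RATE = 1.0: 1,000 input + 500 output tokens on gpt-4 costs (1000/1000) * 0.03 + (500/1000) * 0.06 = 0.06. Since _calculate_cost is now a staticmethod it can be checked directly:

    # Example (not part of the diff).
    cost = UsageTracker._calculate_cost("gpt-4", 1000, 500)
    assert abs(cost - 0.06) < 1e-9

One inconsistency worth flagging: after this change the debug log prints the cost in euros while format_session_usage still prints dollars, even though both values go through the same EXCHANGE_RATE.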


@@ -1,86 +1,68 @@
import os
-from pr.core.exceptions import ValidationError
+from rp.core.exceptions import ValidationError


def validate_file_path(path: str, must_exist: bool = False) -> str:
    if not path:
        raise ValidationError("File path cannot be empty")
-    if must_exist and not os.path.exists(path):
+    if must_exist and (not os.path.exists(path)):
        raise ValidationError(f"File does not exist: {path}")
    if must_exist and os.path.isdir(path):
        raise ValidationError(f"Path is a directory, not a file: {path}")
    return os.path.abspath(path)


def validate_directory_path(path: str, must_exist: bool = False, create: bool = False) -> str:
    if not path:
        raise ValidationError("Directory path cannot be empty")
    abs_path = os.path.abspath(path)
-    if must_exist and not os.path.exists(abs_path):
+    if must_exist and (not os.path.exists(abs_path)):
        if create:
            os.makedirs(abs_path, exist_ok=True)
        else:
            raise ValidationError(f"Directory does not exist: {abs_path}")
-    if os.path.exists(abs_path) and not os.path.isdir(abs_path):
+    if os.path.exists(abs_path) and (not os.path.isdir(abs_path)):
        raise ValidationError(f"Path is not a directory: {abs_path}")
    return abs_path


def validate_model_name(model: str) -> str:
    if not model:
        raise ValidationError("Model name cannot be empty")
    if len(model) < 2:
        raise ValidationError("Model name too short")
    return model


def validate_api_url(url: str) -> str:
    if not url:
        raise ValidationError("API URL cannot be empty")
    if not url.startswith(("http://", "https://")):
        raise ValidationError("API URL must start with http:// or https://")
    return url


def validate_session_name(name: str) -> str:
    if not name:
        raise ValidationError("Session name cannot be empty")
    invalid_chars = ["/", "\\", ":", "*", "?", '"', "<", ">", "|"]
    for char in invalid_chars:
        if char in name:
            raise ValidationError(f"Session name contains invalid character: {char}")
    if len(name) > 255:
        raise ValidationError("Session name too long (max 255 characters)")
    return name


def validate_temperature(temp: float) -> float:
    if not 0.0 <= temp <= 2.0:
        raise ValidationError("Temperature must be between 0.0 and 2.0")
    return temp


def validate_max_tokens(tokens: int) -> int:
    if tokens < 1:
        raise ValidationError("Max tokens must be at least 1")
    if tokens > 100000:
        raise ValidationError("Max tokens too high (max 100000)")
    return tokens
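
Reviewer note: these validators normalize on success and raise on failure, so the natural call pattern is try/except (values illustrative):

    # Example (not part of the diff).
    try:
        path = validate_file_path("config.json")   # returns the absolute path
        temp = validate_temperature(0.7)           # returns 0.7
        name = validate_session_name("demo/1")     # raises: invalid character '/'
    except ValidationError as e:
        print(f"Invalid input: {e}")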


@@ -1,4 +1,3 @@
-#!/usr/bin/env python3
import atexit
import curses
import os
@@ -13,6 +12,7 @@ import time

class RPEditor:
    def __init__(self, filename=None, auto_save=False, timeout=30):
        """
        Initialize RPEditor with enhanced robustness features.
@@ -46,8 +46,6 @@ class RPEditor:
        self._cleanup_registered = False
        self._original_terminal_state = None
        self._exception_occurred = False
-        # Create socket pair with error handling
        try:
            self.client_sock, self.server_sock = socket.socketpair()
            self.client_sock.settimeout(self.timeout)
@@ -55,10 +53,7 @@ class RPEditor:
        except Exception as e:
            self._cleanup()
            raise RuntimeError(f"Failed to create socket pair: {e}")
-        # Register cleanup handlers
        self._register_cleanup()
        if filename:
            self.load_file()
@@ -78,17 +73,12 @@ class RPEditor:
    def _cleanup(self):
        """Comprehensive cleanup of all resources."""
        try:
-            # Stop the editor
            self.running = False
-            # Save if auto-save is enabled
-            if self.auto_save and self.filename and not self._exception_occurred:
+            if self.auto_save and self.filename and (not self._exception_occurred):
                try:
                    self._save_file()
                except:
                    pass
-            # Clean up curses
            if self.stdscr:
                try:
                    self.stdscr.keypad(False)
@@ -102,26 +92,19 @@ class RPEditor:
                    curses.endwin()
                except:
                    pass
-            # Clear screen after curses cleanup
            try:
                os.system("clear" if os.name != "nt" else "cls")
            except:
                pass
-            # Close sockets
            for sock in [self.client_sock, self.server_sock]:
                if sock:
                    try:
                        sock.close()
                    except:
                        pass
-            # Wait for threads to finish
            for thread in [self.thread, self.socket_thread]:
                if thread and thread.is_alive():
                    thread.join(timeout=1)
        except:
            pass
@@ -136,16 +119,13 @@ class RPEditor:
                self.lines = [""]
        except Exception:
            self.lines = [""]
-            # Don't raise, just use empty content

    def _save_file(self):
        """Save file with enhanced error handling and backup."""
        with self.lock:
            if not self.filename:
                return False
            try:
-                # Create backup if file exists
                if os.path.exists(self.filename):
                    backup_name = f"{self.filename}.bak"
                    try:
@@ -154,9 +134,7 @@ class RPEditor:
                        with open(backup_name, "w", encoding="utf-8") as f:
                            f.write(backup_content)
                    except:
-                        pass  # Backup failed, but continue with save
+                        pass
-                # Save the file
                with open(self.filename, "w", encoding="utf-8") as f:
                    f.write("\n".join(self.lines))
                return True
@@ -170,13 +148,12 @@ class RPEditor:
        try:
            self.client_sock.send(pickle.dumps({"command": "save_file"}))
        except:
-            return self._save_file()  # Fallback to direct save
+            return self._save_file()

    def start(self):
        """Start the editor with enhanced error handling."""
        if self.running:
            return False
        try:
            self.running = True
            self.socket_thread = threading.Thread(target=self.socket_listener, daemon=True)
@@ -196,9 +173,8 @@ class RPEditor:
            self.client_sock.send(pickle.dumps({"command": "stop"}))
        except:
            pass
        self.running = False
-        time.sleep(0.1)  # Give threads time to finish
+        time.sleep(0.1)
        self._cleanup()

    def run(self):
@@ -212,16 +188,12 @@ class RPEditor:
    def main_loop(self, stdscr):
        """Main editor loop with enhanced error recovery."""
        self.stdscr = stdscr
        try:
-            # Configure curses
            curses.curs_set(1)
            self.stdscr.keypad(True)
-            self.stdscr.timeout(100)  # Non-blocking with timeout
+            self.stdscr.timeout(100)
            while self.running:
                try:
-                    # Process queued commands
                    while True:
                        try:
                            command = self.command_queue.get_nowait()
@@ -229,24 +201,17 @@ class RPEditor:
                            self.execute_command(command)
                        except queue.Empty:
                            break
-                    # Draw screen
                    with self.lock:
                        self.draw()
-                    # Handle input
                    try:
                        key = self.stdscr.getch()
-                        if key != -1:  # -1 means timeout/no input
+                        if key != -1:
                            with self.lock:
                                self.handle_key(key)
                    except curses.error:
-                        pass  # Ignore curses errors
+                        pass
                except Exception:
-                    # Log error but continue running
                    pass
        except Exception:
            self._exception_occurred = True
        finally:
@@ -257,39 +222,30 @@ class RPEditor:
        try:
            self.stdscr.clear()
            height, width = self.stdscr.getmaxyx()
-            # Draw lines
            for i, line in enumerate(self.lines):
                if i >= height - 1:
                    break
                try:
-                    # Handle long lines and special characters
                    display_line = line[: width - 1] if len(line) >= width else line
                    self.stdscr.addstr(i, 0, display_line)
                except curses.error:
-                    pass  # Skip lines that can't be displayed
+                    pass
-            # Draw status line
-            status = f"{self.mode.upper()} | {self.filename or 'untitled'} | {self.cursor_y+1}:{self.cursor_x+1}"
+            status = f"{self.mode.upper()} | {self.filename or 'untitled'} | {self.cursor_y + 1}:{self.cursor_x + 1}"
            if self.mode == "command":
                status = self.command[: width - 1]
            try:
                self.stdscr.addstr(height - 1, 0, status[: width - 1])
            except curses.error:
                pass
-            # Position cursor
            cursor_x = min(self.cursor_x, width - 1)
            cursor_y = min(self.cursor_y, height - 2)
            try:
                self.stdscr.move(cursor_y, cursor_x)
            except curses.error:
                pass
            self.stdscr.refresh()
        except Exception:
-            pass  # Continue even if draw fails
+            pass

    def handle_key(self, key):
        """Handle keyboard input with error recovery."""
@@ -301,7 +257,7 @@ class RPEditor:
            elif self.mode == "command":
                self.handle_command(key)
        except Exception:
-            pass  # Continue on error
+            pass

    def handle_normal(self, key):
        """Handle normal mode keys."""
@@ -370,9 +326,10 @@ class RPEditor:
                self.cursor_x = 0
            elif key == ord("u"):
                self.undo()
-            elif key == ord("r") and self.prev_key == 18:  # Ctrl-R
+            elif key == 18:
                self.redo()
+            elif key == 19:
+                self._save_file()
            self.prev_key = key
        except Exception:
            pass
@@ -383,10 +340,8 @@ class RPEditor:
                return
            line = self.lines[self.cursor_y]
            i = self.cursor_x
-            # Skip non-alphanumeric
-            while i < len(line) and not line[i].isalnum():
+            while i < len(line) and (not line[i].isalnum()):
                i += 1
-            # Skip alphanumeric
            while i < len(line) and line[i].isalnum():
                i += 1
            self.cursor_x = i
@@ -397,10 +352,8 @@ class RPEditor:
                return
            line = self.lines[self.cursor_y]
            i = max(0, self.cursor_x - 1)
-            # Skip non-alphanumeric
-            while i >= 0 and not line[i].isalnum():
+            while i >= 0 and (not line[i].isalnum()):
                i -= 1
-            # Skip alphanumeric
            while i >= 0 and line[i].isalnum():
                i -= 1
            self.cursor_x = max(0, i + 1)
@@ -408,11 +361,11 @@ class RPEditor:
    def handle_insert(self, key):
        """Handle insert mode keys."""
        try:
-            if key == 27:  # ESC
+            if key == 27:
                self.mode = "normal"
                if self.cursor_x > 0:
                    self.cursor_x -= 1
-            elif key == 10 or key == 13:  # Enter
+            elif key == 10 or key == 13:
                self._split_line()
            elif key == curses.KEY_BACKSPACE or key == 127 or key == 8:
                self._backspace()
@@ -425,7 +378,7 @@ class RPEditor:
    def handle_command(self, key):
        """Handle command mode keys."""
        try:
-            if key == 10 or key == 13:  # Enter
+            if key == 10 or key == 13:
                cmd = self.command[1:].strip()
                if cmd in ["q", "q!"]:
                    self.running = False
@@ -439,7 +392,7 @@ class RPEditor:
                    self._save_file()
                self.mode = "normal"
                self.command = ""
-            elif key == 27:  # ESC
+            elif key == 27:
                self.mode = "normal"
                self.command = ""
            elif key == curses.KEY_BACKSPACE or key == 127 or key == 8:
@@ -455,14 +408,10 @@ class RPEditor:
        """Move cursor with bounds checking."""
        if not self.lines:
            self.lines = [""]
        new_y = self.cursor_y + dy
        new_x = self.cursor_x + dx
-        # Ensure valid Y position
        if 0 <= new_y < len(self.lines):
            self.cursor_y = new_y
-            # Ensure valid X position for new line
            max_x = len(self.lines[self.cursor_y])
            self.cursor_x = max(0, min(new_x, max_x))
        elif new_y < 0:
@@ -499,8 +448,7 @@ class RPEditor:
            self.lines = state["lines"]
            self.cursor_y = min(state["cursor_y"], len(self.lines) - 1)
            self.cursor_x = min(
-                state["cursor_x"],
-                len(self.lines[self.cursor_y]) if self.lines else 0,
+                state["cursor_x"], len(self.lines[self.cursor_y]) if self.lines else 0
            )

    def redo(self):
@@ -517,41 +465,32 @@ class RPEditor:
            self.lines = state["lines"]
            self.cursor_y = min(state["cursor_y"], len(self.lines) - 1)
            self.cursor_x = min(
-                state["cursor_x"],
-                len(self.lines[self.cursor_y]) if self.lines else 0,
+                state["cursor_x"], len(self.lines[self.cursor_y]) if self.lines else 0
            )

    def _insert_text(self, text):
        """Insert text at cursor position."""
        if not text:
            return
        self.save_state()
        lines = text.split("\n")
        if len(lines) == 1:
-            # Single line insert
            if self.cursor_y >= len(self.lines):
                self.lines.append("")
                self.cursor_y = len(self.lines) - 1
            line = self.lines[self.cursor_y]
            self.lines[self.cursor_y] = line[: self.cursor_x] + text + line[self.cursor_x :]
            self.cursor_x += len(text)
        else:
-            # Multi-line insert
            if self.cursor_y >= len(self.lines):
                self.lines.append("")
                self.cursor_y = len(self.lines) - 1
            first = self.lines[self.cursor_y][: self.cursor_x] + lines[0]
            last = lines[-1] + self.lines[self.cursor_y][self.cursor_x :]
            self.lines[self.cursor_y] = first
            for i in range(1, len(lines) - 1):
                self.lines.insert(self.cursor_y + i, lines[i])
            self.lines.insert(self.cursor_y + len(lines) - 1, last)
            self.cursor_y += len(lines) - 1
            self.cursor_x = len(lines[-1])
@@ -583,7 +522,6 @@ class RPEditor:
        if self.cursor_y >= len(self.lines):
            self.lines.append("")
            self.cursor_y = len(self.lines) - 1
        line = self.lines[self.cursor_y]
        self.lines[self.cursor_y] = line[: self.cursor_x] + char + line[self.cursor_x :]
        self.cursor_x += 1
@@ -593,7 +531,6 @@ class RPEditor:
        if self.cursor_y >= len(self.lines):
            self.lines.append("")
            self.cursor_y = len(self.lines) - 1
        line = self.lines[self.cursor_y]
        self.lines[self.cursor_y] = line[: self.cursor_x]
        self.lines.insert(self.cursor_y + 1, line[self.cursor_x :])
@@ -719,7 +656,6 @@ class RPEditor:
        """Execute command with error handling."""
        try:
            cmd = command.get("command")
            if cmd == "insert_text":
                self._insert_text(command.get("text", ""))
            elif cmd == "delete_char":
@@ -749,8 +685,6 @@ class RPEditor:
        except Exception:
            pass

-    # Additional public methods for backwards compatibility
    def move_cursor_to(self, y, x):
        """Move cursor to specific position."""
        with self.lock:
@@ -793,11 +727,8 @@ class RPEditor:
        """Replace text in range."""
        with self.lock:
            self.save_state()
-            # Validate bounds
            start_line = max(0, min(start_line, len(self.lines) - 1))
            end_line = max(0, min(end_line, len(self.lines) - 1))
            if start_line == end_line:
                line = self.lines[start_line]
                start_col = max(0, min(start_col, len(line)))
@@ -847,17 +778,12 @@ class RPEditor:
        with self.lock:
            if not self.selection_start or not self.selection_end:
                return ""
            sl, sc = self.selection_start
            el, ec = self.selection_end
-            # Validate bounds
-            if sl < 0 or sl >= len(self.lines) or el < 0 or el >= len(self.lines):
+            if sl < 0 or sl >= len(self.lines) or el < 0 or (el >= len(self.lines)):
                return ""
            if sl == el:
                return self.lines[sl][sc:ec]
            result = [self.lines[sl][sc:]]
            for i in range(sl + 1, el):
                if i < len(self.lines):
@@ -885,7 +811,6 @@ class RPEditor:
            self.save_state()
            search_lines = search_block.splitlines()
            replace_lines = replace_block.splitlines()
            for i in range(len(self.lines) - len(search_lines) + 1):
                match = True
                for j, search_line in enumerate(search_lines):
@@ -895,9 +820,7 @@ class RPEditor:
                    if self.lines[i + j].strip() != search_line.strip():
                        match = False
                        break
                if match:
-                    # Preserve indentation
                    indent = len(self.lines[i]) - len(self.lines[i].lstrip())
                    indented_replace = [" " * indent + line for line in replace_lines]
                    self.lines[i : i + len(search_lines)] = indented_replace
@@ -911,10 +834,9 @@ class RPEditor:
        try:
            lines = diff_text.split("\n")
            start_line = 0
            for line in lines:
                if line.startswith("@@"):
-                    match = re.search(r"@@ -(\d+),?(\d*) \+(\d+),?(\d*) @@", line)
+                    match = re.search("@@ -(\\d+),?(\\d*) \\+(\\d+),?(\\d*) @@", line)
                    if match:
                        start_line = int(match.group(1)) - 1
                elif line.startswith("-"):
@@ -923,7 +845,7 @@ class RPEditor:
                elif line.startswith("+"):
                    self.lines.insert(start_line, line[1:])
                    start_line += 1
-                elif line and not line.startswith("\\"):
+                elif line and (not line.startswith("\\")):
                    start_line += 1
        except Exception:
            pass
@@ -977,18 +899,11 @@ def main():
    editor = None
    try:
        filename = sys.argv[1] if len(sys.argv) > 1 else None
-        # Parse additional arguments
        auto_save = "--auto-save" in sys.argv
-        # Create and start editor
        editor = RPEditor(filename, auto_save=auto_save)
        editor.start()
-        # Wait for editor to finish
        if editor.thread:
            editor.thread.join()
    except KeyboardInterrupt:
        pass
    except Exception as e:
@@ -996,7 +911,6 @@ def main():
    finally:
        if editor:
            editor.stop()
-        # Ensure screen is cleared
        os.system("clear" if os.name != "nt" else "cls")
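
Reviewer note: main() above shows the supported entry points; a minimal embedding sketch using only methods visible in this diff (start, stop, move_cursor_to). Keys 18 and 19 are Ctrl-R and Ctrl-S, so the new bindings mean redo and save:

    # Example (not part of the diff); interactive behavior depends on the terminal.
    editor = RPEditor("notes.txt", auto_save=True)
    editor.start()
    editor.move_cursor_to(0, 0)
    # ... user edits; Ctrl-R redoes, Ctrl-S saves ...
    editor.stop()  # sends {"command": "stop"} and restores the terminal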


@@ -1,4 +1,3 @@
-#!/usr/bin/env python3
import curses
import pickle
import queue
@@ -9,6 +8,7 @@ import threading

class RPEditor:
    def __init__(self, filename=None):
        self.filename = filename
        self.lines = [""]
@@ -97,7 +97,7 @@ class RPEditor:
        for i, line in enumerate(self.lines):
            if i < height - 1:
                self.stdscr.addstr(i, 0, line[:width])
-        status = f"{self.mode.upper()} | {self.filename or 'untitled'} | {self.cursor_y+1}:{self.cursor_x+1}"
+        status = f"{self.mode.upper()} | {self.filename or 'untitled'} | {self.cursor_y + 1}:{self.cursor_x + 1}"
        self.stdscr.addstr(height - 1, 0, status[:width])
        if self.mode == "command":
            self.stdscr.addstr(height - 1, 0, self.command[:width])
@@ -161,7 +161,7 @@ class RPEditor:
        elif key == ord("w"):
            line = self.lines[self.cursor_y]
            i = self.cursor_x
-            while i < len(line) and not line[i].isalnum():
+            while i < len(line) and (not line[i].isalnum()):
                i += 1
            while i < len(line) and line[i].isalnum():
                i += 1
@@ -169,7 +169,7 @@ class RPEditor:
        elif key == ord("b"):
            line = self.lines[self.cursor_y]
            i = self.cursor_x - 1
-            while i >= 0 and not line[i].isalnum():
+            while i >= 0 and (not line[i].isalnum()):
                i -= 1
            while i >= 0 and line[i].isalnum():
                i -= 1
@@ -211,7 +211,7 @@ class RPEditor:
            self.running = False
        elif cmd == "w":
            self._save_file()
-        elif cmd == "wq" or cmd == "wq!" or cmd == "x" or cmd == "xq" or cmd == "x!":
+        elif cmd == "wq" or cmd == "wq!" or cmd == "x" or (cmd == "xq") or (cmd == "x!"):
            self._save_file()
            self.running = False
        elif cmd.startswith("w "):
@@ -562,7 +562,7 @@ class RPEditor:
        lines = diff_text.split("\n")
        for line in lines:
            if line.startswith("@@"):
-                match = re.search(r"@@ -(\d+),?(\d*) \+(\d+),?(\d*) @@", line)
+                match = re.search("@@ -(\\d+),?(\\d*) \\+(\\d+),?(\\d*) @@", line)
                if match:
                    start_line = int(match.group(1)) - 1
            elif line.startswith("-"):
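
Reviewer note: both editor variants parse hunk headers with the same pattern (the raw-string and escaped spellings are equivalent); a quick check of what it captures, with an invented header:

    # Example (not part of the diff).
    import re
    m = re.search(r"@@ -(\d+),?(\d*) \+(\d+),?(\d*) @@", "@@ -214,28 +167,22 @@")
    print(m.groups())  # ('214', '28', '167', '22') -> old start/len, new start/len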

rp/implode.py Normal file

@@ -0,0 +1,457 @@
"""
impLODE: A Python script to consolidate a multi-file Python project
into a single, runnable file.

It intelligently resolves local imports, hoists external dependencies to the top,
and preserves the core logic, using AST for safe transformations.
"""
import os
import sys
import ast
import argparse
import logging
import py_compile
from typing import Set, Dict, Optional, TextIO

logger = logging.getLogger("impLODE")


class ImportTransformer(ast.NodeTransformer):
    """
    An AST transformer that visits Import and ImportFrom nodes.

    On Pass 1 (Dry Run):
    - Identifies all local vs. external imports.
    - Recursively calls the main resolver for local modules.
    - Stores external and __future__ imports in the Imploder instance.

    On Pass 2 (Write Run):
    - Recursively calls the main resolver for local modules.
    - Removes all import statements (since they were hoisted in Pass 1).
    """

    def __init__(
        self,
        imploder: "Imploder",
        current_file_path: str,
        f_out: Optional[TextIO],
        is_dry_run: bool,
        indent_level: int = 0,
    ):
        self.imploder = imploder
        self.current_file_path = current_file_path
        self.current_dir = os.path.dirname(current_file_path)
        self.f_out = f_out
        self.is_dry_run = is_dry_run
        self.indent = " " * indent_level
        self.logger = logging.getLogger(self.__class__.__name__)

    def _log_debug(self, msg: str):
        """Helper for indented debug logging."""
        self.logger.debug(f"{self.indent} > {msg}")

    def _find_local_module(self, module_name: str, level: int) -> Optional[str]:
        """
        Tries to find the absolute path for a given module name and relative level.
        Returns None if it's not a local module *and* cannot be found in site-packages.
        """
        if not module_name:
            base_path = self.current_dir
            if level > 0:
                for _ in range(level - 1):
                    base_path = os.path.dirname(base_path)
            return base_path
        base_path = self.current_dir
        if level > 0:
            for _ in range(level - 1):
                base_path = os.path.dirname(base_path)
        else:
            base_path = self.imploder.root_dir
        module_parts = module_name.split(".")
        module_path = os.path.join(base_path, *module_parts)
        package_init = os.path.join(module_path, "__init__.py")
        if os.path.isfile(package_init):
            self._log_debug(
                f"Resolved '{module_name}' to local package: {os.path.relpath(package_init, self.imploder.root_dir)}"
            )
            return package_init
        module_py = module_path + ".py"
        if os.path.isfile(module_py):
            self._log_debug(
                f"Resolved '{module_name}' to local module: {os.path.relpath(module_py, self.imploder.root_dir)}"
            )
            return module_py
        if level == 0:
            self._log_debug(
                f"Module '{module_name}' not found at primary path. Starting deep fallback search from {self.imploder.root_dir}..."
            )
            target_path_py = os.path.join(*module_parts) + ".py"
            target_path_init = os.path.join(*module_parts, "__init__.py")
            for dirpath, dirnames, filenames in os.walk(self.imploder.root_dir, topdown=True):
                dirnames[:] = [
                    d
                    for d in dirnames
                    if not d.startswith(".")
                    and d not in ("venv", "env", ".venv", ".env", "__pycache__", "node_modules")
                ]
                check_file_py = os.path.join(dirpath, target_path_py)
                if os.path.isfile(check_file_py):
                    self._log_debug(
                        f"Fallback search found module: {os.path.relpath(check_file_py, self.imploder.root_dir)}"
                    )
                    return check_file_py
                check_file_init = os.path.join(dirpath, target_path_init)
                if os.path.isfile(check_file_init):
                    self._log_debug(
                        f"Fallback search found package: {os.path.relpath(check_file_init, self.imploder.root_dir)}"
                    )
                    return check_file_init
        return None

    def visit_Import(self, node: ast.Import) -> Optional[ast.AST]:
        """Handles `import foo` or `import foo.bar`."""
        for alias in node.names:
            module_path = self._find_local_module(alias.name, level=0)
            if module_path:
                self._log_debug(f"Resolving local import: `import {alias.name}`")
                self.imploder.resolve_file(
                    file_abs_path=module_path,
                    f_out=self.f_out,
                    is_dry_run=self.is_dry_run,
                    indent_level=self.imploder.current_indent_level,
                )
            else:
                self._log_debug(f"Found external import: `import {alias.name}`")
                if self.is_dry_run:
                    key = f"import {alias.name}"
                    if key not in self.imploder.external_imports:
                        self.imploder.external_imports[key] = node
        module_names = ", ".join([a.name for a in node.names])
        new_call = ast.Call(
            func=ast.Name(id="_implode_log_import", ctx=ast.Load()),
            args=[
                ast.Constant(value=module_names),
                ast.Constant(value=0),
                ast.Constant(value="import"),
            ],
            keywords=[],
        )
        return ast.Expr(value=new_call)

    def visit_ImportFrom(self, node: ast.ImportFrom) -> Optional[ast.AST]:
        """Handles `from foo import bar` or `from .foo import bar`."""
        module_name_str = node.module or ""
        import_type = "from-import"
        if module_name_str == "__future__":
            import_type = "future-import"
        new_call = ast.Call(
            func=ast.Name(id="_implode_log_import", ctx=ast.Load()),
            args=[
                ast.Constant(value=module_name_str),
                ast.Constant(value=node.level),
                ast.Constant(value=import_type),
            ],
            keywords=[],
        )
        replacement_node = ast.Expr(value=new_call)
        if node.module == "__future__":
            self._log_debug("Found __future__ import. Hoisting to top.")
            if self.is_dry_run:
                key = ast.unparse(node)
                self.imploder.future_imports[key] = node
            return replacement_node
        module_path = self._find_local_module(node.module or "", node.level)
        if module_path and os.path.isdir(module_path):
            self._log_debug(f"Resolving package import: `from {node.module or '.'} import ...`")
            for alias in node.names:
                package_module_py = os.path.join(module_path, alias.name + ".py")
                package_module_init = os.path.join(module_path, alias.name, "__init__.py")
                if os.path.isfile(package_module_py):
                    self._log_debug(
                        f"Found sub-module: {os.path.relpath(package_module_py, self.imploder.root_dir)}"
                    )
                    self.imploder.resolve_file(
                        file_abs_path=package_module_py,
                        f_out=self.f_out,
                        is_dry_run=self.is_dry_run,
                        indent_level=self.imploder.current_indent_level,
                    )
                elif os.path.isfile(package_module_init):
                    self._log_debug(
                        f"Found sub-package: {os.path.relpath(package_module_init, self.imploder.root_dir)}"
                    )
                    self.imploder.resolve_file(
                        file_abs_path=package_module_init,
                        f_out=self.f_out,
                        is_dry_run=self.is_dry_run,
                        indent_level=self.imploder.current_indent_level,
                    )
                else:
                    self.logger.warning(
                        f"{self.indent} > Could not resolve sub-module '{alias.name}' in package '{module_path}'"
                    )
            return replacement_node
        if module_path:
            self._log_debug(f"Resolving local from-import: `from {node.module or '.'} ...`")
            self.imploder.resolve_file(
                file_abs_path=module_path,
                f_out=self.f_out,
                is_dry_run=self.is_dry_run,
                indent_level=self.imploder.current_indent_level,
            )
        else:
            self._log_debug(f"Found external from-import: `from {node.module or '.'} ...`")
            if self.is_dry_run:
                key = ast.unparse(node)
                if key not in self.imploder.external_imports:
                    self.imploder.external_imports[key] = node
        return replacement_node


class Imploder:
    """
    Core class for handling the implosion process.
    Manages state, file processing, and the two-pass analysis.
    """

    def __init__(self, root_dir: str, enable_import_logging: bool = False):
        self.root_dir = os.path.realpath(root_dir)
        self.processed_files: Set[str] = set()
        self.external_imports: Dict[str, ast.AST] = {}
        self.future_imports: Dict[str, ast.AST] = {}
        self.current_indent_level = 0
        self.enable_import_logging = enable_import_logging
        logger.info(f"Initialized Imploder with root: {self.root_dir}")

    def implode(self, main_file_abs_path: str, output_file_path: str):
        """
        Runs the full two-pass implosion process.
        """
        if not os.path.isfile(main_file_abs_path):
            logger.critical(f"Main file not found: {main_file_abs_path}")
            sys.exit(1)
        logger.info(
            f"--- PASS 1: Analyzing dependencies from {os.path.relpath(main_file_abs_path, self.root_dir)} ---"
        )
        self.processed_files.clear()
        self.external_imports.clear()
        self.future_imports.clear()
        try:
            self.resolve_file(main_file_abs_path, f_out=None, is_dry_run=True, indent_level=0)
        except Exception as e:
            logger.critical(f"Error during analysis pass: {e}", exc_info=True)
            sys.exit(1)
        logger.info(
            f"--- Analysis complete. Found {len(self.future_imports)} __future__ imports and {len(self.external_imports)} external modules. ---"
        )
        logger.info(f"--- PASS 2: Writing imploded file to {output_file_path} ---")
        self.processed_files.clear()
        try:
            with open(output_file_path, "w", encoding="utf-8") as f_out:
                f_out.write("#!/usr/bin/env python3\n")
                f_out.write("# -*- coding: utf-8 -*-\n")
                f_out.write("import logging\n")
                f_out.write("\n# --- IMPLODED FILE: Generated by impLODE --- #\n")
                f_out.write(
                    f"# --- Original main file: {os.path.relpath(main_file_abs_path, self.root_dir)} --- #\n"
                )
                if self.future_imports:
                    f_out.write("\n# --- Hoisted __future__ Imports --- #\n")
                    for node in self.future_imports.values():
                        f_out.write(f"{ast.unparse(node)}\n")
                    f_out.write("# --- End __future__ Imports --- #\n")
                enable_logging_str = "True" if self.enable_import_logging else "False"
                f_out.write("\n# --- impLODE Helper Function --- #\n")
                f_out.write(f"_IMPLODE_LOGGING_ENABLED_ = {enable_logging_str}\n")
                f_out.write("def _implode_log_import(module_name, level, import_type):\n")
                f_out.write(
                    '    """Dummy function to replace imports and prevent IndentationErrors."""\n'
                )
                f_out.write("    if _IMPLODE_LOGGING_ENABLED_:\n")
                f_out.write(
                    "        print(f\"[impLODE Logger]: Skipped {import_type}: module='{module_name}', level={level}\")\n"
                )
                f_out.write("    pass\n")
                f_out.write("# --- End Helper Function --- #\n")
                if self.external_imports:
                    f_out.write("\n# --- Hoisted External Imports --- #\n")
                    for node in self.external_imports.values():
                        f_out.write("try:\n")
                        f_out.write(f"    {ast.unparse(node)}\n")
                        f_out.write("except ImportError:\n")
                        f_out.write("    pass\n")
                    f_out.write("# --- End External Imports --- #\n")
                self.resolve_file(main_file_abs_path, f_out=f_out, is_dry_run=False, indent_level=0)
        except IOError as e:
            logger.critical(
                f"Could not write to output file {output_file_path}: {e}", exc_info=True
            )
            sys.exit(1)
        except Exception as e:
            logger.critical(f"Error during write pass: {e}", exc_info=True)
            sys.exit(1)
        logger.info(f"--- Implosion complete! Output saved to {output_file_path} ---")

    def resolve_file(
        self, file_abs_path: str, f_out: Optional[TextIO], is_dry_run: bool, indent_level: int = 0
    ):
        """
        Recursively resolves a single file.
        - `is_dry_run=True`: Analyzes imports, populating `external_imports`.
        - `is_dry_run=False`: Writes transformed code to `f_out`.
        """
        self.current_indent_level = indent_level
        indent = " " * indent_level
        try:
            file_real_path = os.path.realpath(file_abs_path)
            rel_path = os.path.relpath(file_real_path, self.root_dir)
        except ValueError:
            logger.warning(
                f"{indent}Cannot calculate relative path for {file_abs_path}. Using absolute."
            )
            rel_path = file_abs_path
        if file_real_path in self.processed_files:
            logger.debug(f"{indent}Skipping already processed file: {rel_path}")
            return
        logger.info(f"{indent}Processing: {rel_path}")
        self.processed_files.add(file_real_path)
        try:
            with open(file_real_path, "r", encoding="utf-8") as f:
                code = f.read()
        except FileNotFoundError:
            logger.error(f"{indent}File not found: {file_real_path}")
            return
        except UnicodeDecodeError:
            logger.error(f"{indent}Could not decode file (not utf-8): {file_real_path}")
            return
        except Exception as e:
            logger.error(f"{indent}Could not read file {file_real_path}: {e}")
            return
        try:
            py_compile.compile(file_real_path, doraise=True, quiet=1)
            logger.debug(f"{indent}Syntax OK (py_compile): {rel_path}")
        except py_compile.PyCompileError as e:
            logger.error(
                f"{indent}Syntax error (py_compile) in {e.file} on line {e.lineno}: {e.msg}"
            )
            return
        except Exception as e:
            logger.error(f"{indent}Error during py_compile for {rel_path}: {e}")
            return
        try:
            tree = ast.parse(code, filename=file_real_path)
        except SyntaxError as e:
            logger.error(f"{indent}Syntax error in {rel_path} on line {e.lineno}: {e.msg}")
            return
        except Exception as e:
            logger.error(f"{indent}Could not parse AST for {rel_path}: {e}")
            return
        transformer = ImportTransformer(
            imploder=self,
            current_file_path=file_real_path,
            f_out=f_out,
            is_dry_run=is_dry_run,
            indent_level=indent_level,
        )
        try:
            new_tree = transformer.visit(tree)
        except Exception as e:
            logger.error(f"{indent}Error transforming AST for {rel_path}: {e}", exc_info=True)
            return
        if not is_dry_run and f_out:
            try:
                ast.fix_missing_locations(new_tree)
                f_out.write(f"\n\n# --- Content from {rel_path} --- #\n")
                f_out.write(ast.unparse(new_tree))
                f_out.write(f"\n# --- End of {rel_path} --- #\n")
                logger.info(f"{indent}Successfully wrote content from: {rel_path}")
            except Exception as e:
                logger.error(
                    f"{indent}Could not unparse or write AST for {rel_path}: {e}", exc_info=True
                )
        self.current_indent_level = indent_level


def setup_logging(level: int):
    """Configures the root logger."""
    handler = logging.StreamHandler()
    formatter = logging.Formatter("%(asctime)s - %(name)s - %(levelname)s - %(message)s")
    handler.setFormatter(formatter)
    logger.setLevel(level)
    logger.addHandler(handler)
    transformer_logger = logging.getLogger("ImportTransformer")
transformer_logger.setLevel(level)
transformer_logger.addHandler(handler)
def main():
"""Main entry point for the script."""
parser = argparse.ArgumentParser(
description="impLODE: Consolidate a multi-file Python project into one file.",
formatter_class=argparse.RawDescriptionHelpFormatter,
)
parser.add_argument(
"main_file", type=str, help="The main entry point .py file of your project."
)
parser.add_argument(
"-o",
"--output",
type=str,
default="imploded.py",
help="Path for the combined output file. (default: imploded.py)",
)
parser.add_argument(
"-r",
"--root",
type=str,
default=".",
help="The root directory of the project for resolving absolute imports. (default: current directory)",
)
log_group = parser.add_mutually_exclusive_group()
log_group.add_argument(
"-v",
"--verbose",
action="store_const",
dest="log_level",
const=logging.DEBUG,
default=logging.INFO,
help="Enable verbose DEBUG logging.",
)
log_group.add_argument(
"-q",
"--quiet",
action="store_const",
dest="log_level",
const=logging.WARNING,
help="Suppress INFO logs, showing only WARNINGS and ERRORS.",
)
parser.add_argument(
"--enable-import-logging",
action="store_true",
help="Enable runtime logging for removed import statements in the final imploded script.",
)
args = parser.parse_args()
setup_logging(args.log_level)
root_dir = os.path.abspath(args.root)
main_file_path = os.path.abspath(args.main_file)
output_file_path = os.path.abspath(args.output)
if not os.path.isdir(root_dir):
logger.critical(f"Root directory not found: {root_dir}")
sys.exit(1)
if not os.path.isfile(main_file_path):
logger.critical(f"Main file not found: {main_file_path}")
sys.exit(1)
if not main_file_path.startswith(root_dir):
logger.warning(f"Main file {main_file_path} is outside the specified root {root_dir}.")
logger.warning("This may cause issues with absolute import resolution.")
if main_file_path == output_file_path:
logger.critical("Output file cannot be the same as the main file.")
sys.exit(1)
imploder = Imploder(root_dir=root_dir, enable_import_logging=args.enable_import_logging)
imploder.implode(main_file_abs_path=main_file_path, output_file_path=output_file_path)
if __name__ == "__main__":
main()
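
Usage sketch (the flags come from the argparse setup above; the script filename and paths are hypothetical):

python implode.py src/app/main.py -r src -o dist/app_imploded.py -v --enable-import-logging

Pass 1 walks the local import graph and collects __future__ and external imports; pass 2 inlines each local module and hoists the external imports wrapped in try/except ImportError, so the single output file still loads when an optional dependency is missing.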

View File

@ -1,4 +1,3 @@
-#!/usr/bin/env python3
 """
 Advanced input handler for PR Assistant with editor mode, file inclusion, and image support.
 """
@ -10,27 +9,24 @@ import readline
 from pathlib import Path
 from typing import Optional
-# from pr.ui.colors import Colors # Avoid import issues
 class AdvancedInputHandler:
     """Handles advanced input with editor mode, file inclusion, and image support."""
     def __init__(self):
         self.editor_mode = False
-        self.setup_readline()
     def setup_readline(self):
         """Setup readline with basic completer."""
         try:
-            # Simple completer that doesn't interfere
             def completer(text, state):
                 return None
             readline.set_completer(completer)
             readline.parse_and_bind("tab: complete")
         except:
-            pass  # Readline not available
+            pass
     def toggle_editor_mode(self):
         """Toggle between simple and editor input modes."""
@ -54,19 +50,13 @@ class AdvancedInputHandler:
         """Get simple input with file completion."""
         try:
             user_input = input(prompt).strip()
             if not user_input:
                 return ""
-            # Check for special commands
             if user_input.lower() == "/editor":
                 self.toggle_editor_mode()
-                return self.get_input(prompt)  # Recurse to get new input
+                return self.get_input(prompt)
-            # Process file inclusions and images
             processed_input = self._process_input(user_input)
             return processed_input
         except KeyboardInterrupt:
             return None
@ -75,7 +65,6 @@
         try:
             print("Editor mode: Enter your message. Type 'END' on a new line to finish.")
             print("Type '/simple' to switch back to simple mode.")
             lines = []
             while True:
                 try:
@ -84,31 +73,22 @@
                         break
                     elif line.strip().lower() == "/simple":
                         self.toggle_editor_mode()
-                        return self.get_input(prompt)  # Switch back and get input
+                        return self.get_input(prompt)
                     lines.append(line)
                 except EOFError:
                     break
             content = "\n".join(lines).strip()
             if not content:
                 return ""
-            # Process file inclusions and images
             processed_content = self._process_input(content)
             return processed_content
         except KeyboardInterrupt:
             return None
     def _process_input(self, text: str) -> str:
         """Process input text for file inclusions and images."""
-        # Process @[filename] inclusions
         text = self._process_file_inclusions(text)
-        # Process image inclusions (look for image file paths)
         text = self._process_image_inclusions(text)
         return text
     def _process_file_inclusions(self, text: str) -> str:
@ -127,41 +107,31 @@
             except Exception as e:
                 return f"[Error reading file {filename}: {e}]"
-        # Replace @[filename] patterns
-        pattern = r"@\[([^\]]+)\]"
+        pattern = "@\\[([^\\]]+)\\]"
         return re.sub(pattern, replace_file, text)
     def _process_image_inclusions(self, text: str) -> str:
         """Process image file references and encode them."""
-        # Find potential image file paths
         words = text.split()
         processed_parts = []
         for word in words:
-            # Check if it's a file path that exists and is an image
             try:
                 path = Path(word.strip()).expanduser().resolve()
                 if path.exists() and path.is_file():
                     mime_type, _ = mimetypes.guess_type(str(path))
                     if mime_type and mime_type.startswith("image/"):
-                        # Encode image
                         with open(path, "rb") as f:
                             image_data = base64.b64encode(f.read()).decode("utf-8")
-                        # Replace with data URL
                         processed_parts.append(
                             f"[Image: {path.name}]\ndata:{mime_type};base64,{image_data}\n"
                         )
                         continue
             except:
                 pass
             processed_parts.append(word)
         return " ".join(processed_parts)
-# Global instance
 input_handler = AdvancedInputHandler()
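
A quick sketch of the @[filename] inclusion path kept by this hunk (class and method names are from the diff; the file and its contents are hypothetical, and _process_file_inclusions is assumed to substitute the file body via the re.sub shown above):

from pathlib import Path

Path("notes.txt").write_text("hello from notes")  # hypothetical input file
expanded = input_handler._process_file_inclusions("see @[notes.txt]")
print(expanded)  # the @[notes.txt] marker is replaced with the file's contents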

View File

@ -5,6 +5,7 @@ from typing import Any, Dict, List, Optional
 class ConversationMemory:
     def __init__(self, db_path: str):
         self.db_path = db_path
         self._initialize_memory()
@ -12,58 +13,24 @@
     def _initialize_memory(self):
         conn = sqlite3.connect(self.db_path, check_same_thread=False)
         cursor = conn.cursor()
         cursor.execute(
-            """
-            CREATE TABLE IF NOT EXISTS conversation_history (
-                conversation_id TEXT PRIMARY KEY,
-                session_id TEXT,
-                started_at REAL NOT NULL,
-                ended_at REAL,
-                message_count INTEGER DEFAULT 0,
-                summary TEXT,
-                topics TEXT,
-                metadata TEXT
-            )
-            """
+            "\n CREATE TABLE IF NOT EXISTS conversation_history (\n conversation_id TEXT PRIMARY KEY,\n session_id TEXT,\n started_at REAL NOT NULL,\n ended_at REAL,\n message_count INTEGER DEFAULT 0,\n summary TEXT,\n topics TEXT,\n metadata TEXT\n )\n "
         )
         cursor.execute(
-            """
-            CREATE TABLE IF NOT EXISTS conversation_messages (
-                message_id TEXT PRIMARY KEY,
-                conversation_id TEXT NOT NULL,
-                role TEXT NOT NULL,
-                content TEXT NOT NULL,
-                timestamp REAL NOT NULL,
-                tool_calls TEXT,
-                metadata TEXT,
-                FOREIGN KEY (conversation_id) REFERENCES conversation_history(conversation_id)
-            )
-            """
+            "\n CREATE TABLE IF NOT EXISTS conversation_messages (\n message_id TEXT PRIMARY KEY,\n conversation_id TEXT NOT NULL,\n role TEXT NOT NULL,\n content TEXT NOT NULL,\n timestamp REAL NOT NULL,\n tool_calls TEXT,\n metadata TEXT,\n FOREIGN KEY (conversation_id) REFERENCES conversation_history(conversation_id)\n )\n "
         )
         cursor.execute(
-            """
-            CREATE INDEX IF NOT EXISTS idx_conv_session ON conversation_history(session_id)
-            """
+            "\n CREATE INDEX IF NOT EXISTS idx_conv_session ON conversation_history(session_id)\n "
         )
         cursor.execute(
-            """
-            CREATE INDEX IF NOT EXISTS idx_conv_started ON conversation_history(started_at DESC)
-            """
+            "\n CREATE INDEX IF NOT EXISTS idx_conv_started ON conversation_history(started_at DESC)\n "
         )
         cursor.execute(
-            """
-            CREATE INDEX IF NOT EXISTS idx_msg_conversation ON conversation_messages(conversation_id)
-            """
+            "\n CREATE INDEX IF NOT EXISTS idx_msg_conversation ON conversation_messages(conversation_id)\n "
         )
         cursor.execute(
-            """
-            CREATE INDEX IF NOT EXISTS idx_msg_timestamp ON conversation_messages(timestamp)
-            """
+            "\n CREATE INDEX IF NOT EXISTS idx_msg_timestamp ON conversation_messages(timestamp)\n "
         )
         conn.commit()
         conn.close()
@ -75,21 +42,10 @@
     ):
         conn = sqlite3.connect(self.db_path, check_same_thread=False)
         cursor = conn.cursor()
         cursor.execute(
-            """
-            INSERT INTO conversation_history
-            (conversation_id, session_id, started_at, metadata)
-            VALUES (?, ?, ?, ?)
-            """,
-            (
-                conversation_id,
-                session_id,
-                time.time(),
-                json.dumps(metadata) if metadata else None,
-            ),
+            "\n INSERT INTO conversation_history\n (conversation_id, session_id, started_at, metadata)\n VALUES (?, ?, ?, ?)\n ",
+            (conversation_id, session_id, time.time(), json.dumps(metadata) if metadata else None),
         )
         conn.commit()
         conn.close()
@ -104,13 +60,8 @@
     ):
         conn = sqlite3.connect(self.db_path, check_same_thread=False)
         cursor = conn.cursor()
         cursor.execute(
-            """
-            INSERT INTO conversation_messages
-            (message_id, conversation_id, role, content, timestamp, tool_calls, metadata)
-            VALUES (?, ?, ?, ?, ?, ?, ?)
-            """,
+            "\n INSERT INTO conversation_messages\n (message_id, conversation_id, role, content, timestamp, tool_calls, metadata)\n VALUES (?, ?, ?, ?, ?, ?, ?)\n ",
             (
                 message_id,
                 conversation_id,
@ -121,16 +72,10 @@
                 json.dumps(metadata) if metadata else None,
             ),
         )
         cursor.execute(
-            """
-            UPDATE conversation_history
-            SET message_count = message_count + 1
-            WHERE conversation_id = ?
-            """,
+            "\n UPDATE conversation_history\n SET message_count = message_count + 1\n WHERE conversation_id = ?\n ",
             (conversation_id,),
         )
         conn.commit()
         conn.close()
@ -139,29 +84,16 @@
     ) -> List[Dict[str, Any]]:
         conn = sqlite3.connect(self.db_path, check_same_thread=False)
         cursor = conn.cursor()
         if limit:
             cursor.execute(
-                """
-                SELECT message_id, role, content, timestamp, tool_calls, metadata
-                FROM conversation_messages
-                WHERE conversation_id = ?
-                ORDER BY timestamp DESC
-                LIMIT ?
-                """,
+                "\n SELECT message_id, role, content, timestamp, tool_calls, metadata\n FROM conversation_messages\n WHERE conversation_id = ?\n ORDER BY timestamp DESC\n LIMIT ?\n ",
                 (conversation_id, limit),
             )
         else:
             cursor.execute(
-                """
-                SELECT message_id, role, content, timestamp, tool_calls, metadata
-                FROM conversation_messages
-                WHERE conversation_id = ?
-                ORDER BY timestamp ASC
-                """,
+                "\n SELECT message_id, role, content, timestamp, tool_calls, metadata\n FROM conversation_messages\n WHERE conversation_id = ?\n ORDER BY timestamp ASC\n ",
                 (conversation_id,),
             )
         messages = []
         for row in cursor.fetchall():
             messages.append(
@ -174,7 +106,6 @@
                     "metadata": json.loads(row[5]) if row[5] else None,
                 }
             )
         conn.close()
         return messages
@ -183,41 +114,20 @@
     ):
         conn = sqlite3.connect(self.db_path, check_same_thread=False)
         cursor = conn.cursor()
         cursor.execute(
-            """
-            UPDATE conversation_history
-            SET summary = ?, topics = ?, ended_at = ?
-            WHERE conversation_id = ?
-            """,
-            (
-                summary,
-                json.dumps(topics) if topics else None,
-                time.time(),
-                conversation_id,
-            ),
+            "\n UPDATE conversation_history\n SET summary = ?, topics = ?, ended_at = ?\n WHERE conversation_id = ?\n ",
+            (summary, json.dumps(topics) if topics else None, time.time(), conversation_id),
         )
         conn.commit()
         conn.close()
     def search_conversations(self, query: str, limit: int = 10) -> List[Dict[str, Any]]:
         conn = sqlite3.connect(self.db_path, check_same_thread=False)
         cursor = conn.cursor()
         cursor.execute(
-            """
-            SELECT DISTINCT h.conversation_id, h.session_id, h.started_at,
-            h.message_count, h.summary, h.topics
-            FROM conversation_history h
-            LEFT JOIN conversation_messages m ON h.conversation_id = m.conversation_id
-            WHERE h.summary LIKE ? OR h.topics LIKE ? OR m.content LIKE ?
-            ORDER BY h.started_at DESC
-            LIMIT ?
-            """,
+            "\n SELECT DISTINCT h.conversation_id, h.session_id, h.started_at,\n h.message_count, h.summary, h.topics\n FROM conversation_history h\n LEFT JOIN conversation_messages m ON h.conversation_id = m.conversation_id\n WHERE h.summary LIKE ? OR h.topics LIKE ? OR m.content LIKE ?\n ORDER BY h.started_at DESC\n LIMIT ?\n ",
             (f"%{query}%", f"%{query}%", f"%{query}%", limit),
         )
         conversations = []
         for row in cursor.fetchall():
             conversations.append(
@ -230,7 +140,6 @@
                     "topics": json.loads(row[5]) if row[5] else [],
                 }
             )
         conn.close()
         return conversations
@ -239,31 +148,16 @@
     ) -> List[Dict[str, Any]]:
         conn = sqlite3.connect(self.db_path, check_same_thread=False)
         cursor = conn.cursor()
         if session_id:
             cursor.execute(
-                """
-                SELECT conversation_id, session_id, started_at, ended_at,
-                message_count, summary, topics
-                FROM conversation_history
-                WHERE session_id = ?
-                ORDER BY started_at DESC
-                LIMIT ?
-                """,
+                "\n SELECT conversation_id, session_id, started_at, ended_at,\n message_count, summary, topics\n FROM conversation_history\n WHERE session_id = ?\n ORDER BY started_at DESC\n LIMIT ?\n ",
                 (session_id, limit),
             )
         else:
             cursor.execute(
-                """
-                SELECT conversation_id, session_id, started_at, ended_at,
-                message_count, summary, topics
-                FROM conversation_history
-                ORDER BY started_at DESC
-                LIMIT ?
-                """,
+                "\n SELECT conversation_id, session_id, started_at, ended_at,\n message_count, summary, topics\n FROM conversation_history\n ORDER BY started_at DESC\n LIMIT ?\n ",
                 (limit,),
             )
         conversations = []
         for row in cursor.fetchall():
             conversations.append(
@ -277,51 +171,37 @@
                     "topics": json.loads(row[6]) if row[6] else [],
                 }
             )
         conn.close()
         return conversations
     def delete_conversation(self, conversation_id: str) -> bool:
         conn = sqlite3.connect(self.db_path, check_same_thread=False)
         cursor = conn.cursor()
         cursor.execute(
-            "DELETE FROM conversation_messages WHERE conversation_id = ?",
-            (conversation_id,),
+            "DELETE FROM conversation_messages WHERE conversation_id = ?", (conversation_id,)
         )
         cursor.execute(
-            "DELETE FROM conversation_history WHERE conversation_id = ?",
-            (conversation_id,),
+            "DELETE FROM conversation_history WHERE conversation_id = ?", (conversation_id,)
         )
         deleted = cursor.rowcount > 0
         conn.commit()
         conn.close()
         return deleted
     def get_statistics(self) -> Dict[str, Any]:
         conn = sqlite3.connect(self.db_path, check_same_thread=False)
         cursor = conn.cursor()
         cursor.execute("SELECT COUNT(*) FROM conversation_history")
         total_conversations = cursor.fetchone()[0]
         cursor.execute("SELECT COUNT(*) FROM conversation_messages")
         total_messages = cursor.fetchone()[0]
         cursor.execute("SELECT SUM(message_count) FROM conversation_history")
         cursor.fetchone()[0] or 0
         cursor.execute(
-            """
-            SELECT AVG(message_count) FROM conversation_history WHERE message_count > 0
-            """
+            "\n SELECT AVG(message_count) FROM conversation_history WHERE message_count > 0\n "
         )
         avg_messages = cursor.fetchone()[0] or 0
         conn.close()
         return {
             "total_conversations": total_conversations,
             "total_messages": total_messages,

View File

@ -4,18 +4,18 @@ from typing import Any, Dict, List
 class FactExtractor:
     def __init__(self):
         self.fact_patterns = [
-            (r"([A-Z][a-z]+ [A-Z][a-z]+) is (a|an) ([^.]+)", "definition"),
-            (r"([A-Z][a-z]+) (was|is) (born|created|founded) in (\d{4})", "temporal"),
-            (r"([A-Z][a-z]+) (invented|created|developed) ([^.]+)", "attribution"),
-            (r"([^.]+) (costs?|worth) (\$[\d,]+)", "numeric"),
-            (r"([A-Z][a-z]+) (lives?|works?|located) in ([A-Z][a-z]+)", "location"),
+            ("([A-Z][a-z]+ [A-Z][a-z]+) is (a|an) ([^.]+)", "definition"),
+            ("([A-Z][a-z]+) (was|is) (born|created|founded) in (\\d{4})", "temporal"),
+            ("([A-Z][a-z]+) (invented|created|developed) ([^.]+)", "attribution"),
+            ("([^.]+) (costs?|worth) (\\$[\\d,]+)", "numeric"),
+            ("([A-Z][a-z]+) (lives?|works?|located) in ([A-Z][a-z]+)", "location"),
         ]
     def extract_facts(self, text: str) -> List[Dict[str, Any]]:
         facts = []
         for pattern, fact_type in self.fact_patterns:
             matches = re.finditer(pattern, text)
             for match in matches:
@ -27,45 +27,37 @@
                         "confidence": 0.7,
                     }
                 )
         noun_phrases = self._extract_noun_phrases(text)
         for phrase in noun_phrases:
             if len(phrase.split()) >= 2:
                 facts.append(
-                    {
-                        "type": "entity",
-                        "text": phrase,
-                        "components": [phrase],
-                        "confidence": 0.5,
-                    }
+                    {"type": "entity", "text": phrase, "components": [phrase], "confidence": 0.5}
                 )
         return facts
     def _extract_noun_phrases(self, text: str) -> List[str]:
-        sentences = re.split(r"[.!?]", text)
+        sentences = re.split("[.!?]", text)
         phrases = []
         for sentence in sentences:
             words = sentence.split()
             current_phrase = []
             for word in words:
-                if word and word[0].isupper() and len(word) > 1:
+                if word and word[0].isupper() and (len(word) > 1):
                     current_phrase.append(word)
                 else:
                     if len(current_phrase) >= 2:
                         phrases.append(" ".join(current_phrase))
-                    elif len(current_phrase) == 1:
-                        phrases.append(current_phrase[0])
                     current_phrase = []
             if len(current_phrase) >= 2:
                 phrases.append(" ".join(current_phrase))
-            elif len(current_phrase) == 1:
-                phrases.append(current_phrase[0])
         return list(set(phrases))
     def extract_key_terms(self, text: str, top_k: int = 10) -> List[tuple]:
-        words = re.findall(r"\b[a-z]{4,}\b", text.lower())
+        words = re.findall("\\b[a-z]{4,}\\b", text.lower())
         stopwords = {
             "this",
             "that",
@ -123,32 +115,21 @@
             "know",
             "like",
         }
         filtered_words = [w for w in words if w not in stopwords]
         word_freq = defaultdict(int)
         for word in filtered_words:
             word_freq[word] += 1
         sorted_terms = sorted(word_freq.items(), key=lambda x: x[1], reverse=True)
         return sorted_terms[:top_k]
     def extract_relationships(self, text: str) -> List[Dict[str, Any]]:
         relationships = []
         relationship_patterns = [
-            (
-                r"([A-Z][a-z]+) (works for|employed by|member of) ([A-Z][a-z]+)",
-                "employment",
-            ),
-            (r"([A-Z][a-z]+) (owns|has|possesses) ([^.]+)", "ownership"),
-            (
-                r"([A-Z][a-z]+) (located in|part of|belongs to) ([A-Z][a-z]+)",
-                "location",
-            ),
-            (r"([A-Z][a-z]+) (uses|utilizes|implements) ([^.]+)", "usage"),
+            ("([A-Z][a-z]+) (works for|employed by|member of) ([A-Z][a-z]+)", "employment"),
+            ("([A-Z][a-z]+) (owns|has|possesses) ([^.]+)", "ownership"),
+            ("([A-Z][a-z]+) (located in|part of|belongs to) ([A-Z][a-z]+)", "location"),
+            ("([A-Z][a-z]+) (uses|utilizes|implements) ([^.]+)", "usage"),
         ]
         for pattern, rel_type in relationship_patterns:
             matches = re.finditer(pattern, text)
             for match in matches:
@ -161,20 +142,19 @@
                         "confidence": 0.6,
                     }
                 )
         return relationships
     def extract_metadata(self, text: str) -> Dict[str, Any]:
-        word_count = len(text.split())
-        sentence_count = len(re.split(r"[.!?]", text))
-        urls = re.findall(r"https?://[^\s]+", text)
-        email_addresses = re.findall(r"\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Z|a-z]{2,}\b", text)
+        word_count = len(text.split()) if text.strip() else 0
+        sentences = re.split("[.!?]", text.strip())
+        sentence_count = len([s for s in sentences if s.strip()]) if text.strip() else 0
+        urls = re.findall("https?://[^\\s]+", text)
+        email_addresses = re.findall("\\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\\.[A-Z|a-z]{2,}\\b", text)
         dates = re.findall(
-            r"\b\d{1,2}[-/]\d{1,2}[-/]\d{2,4}\b|\b\d{4}[-/]\d{1,2}[-/]\d{1,2}\b", text
+            "\\b\\d{1,2}[-/]\\d{1,2}[-/]\\d{2,4}\\b|\\b\\d{4}[-/]\\d{1,2}[-/]\\d{1,2}\\b|\\b\\d{4}\\b",
+            text,
         )
-        numbers = re.findall(r"\b\d+(?:,\d{3})*(?:\.\d+)?\b", text)
+        numbers = re.findall("\\b\\d+(?:,\\d{3})*(?:\\.\\d+)?\\b", text)
         return {
             "word_count": word_count,
             "sentence_count": sentence_count,
@ -183,13 +163,12 @@
             "email_addresses": email_addresses,
             "dates": dates,
             "numeric_values": numbers,
-            "has_code": bool(re.search(r"```|def |class |import |function ", text)),
-            "has_questions": bool(re.search(r"\?", text)),
+            "has_code": bool(re.search("```|def |class |import |function ", text)),
+            "has_questions": bool(re.search("\\?", text)),
         }
     def categorize_content(self, text: str) -> List[str]:
         categories = []
         category_keywords = {
             "programming": [
                 "code",
@ -200,23 +179,8 @@
                 "software",
                 "debug",
             ],
-            "data": [
-                "data",
-                "database",
-                "query",
-                "table",
-                "record",
-                "statistics",
-                "analysis",
-            ],
-            "documentation": [
-                "documentation",
-                "guide",
-                "tutorial",
-                "manual",
-                "readme",
-                "explain",
-            ],
+            "data": ["data", "database", "query", "table", "record", "statistics", "analysis"],
+            "documentation": ["documentation", "guide", "tutorial", "manual", "readme", "explain"],
             "configuration": [
                 "config",
                 "settings",
@ -225,35 +189,12 @@
                 "install",
                 "deployment",
             ],
-            "testing": [
-                "test",
-                "testing",
-                "validate",
-                "verification",
-                "quality",
-                "assertion",
-            ],
-            "research": [
-                "research",
-                "study",
-                "analysis",
-                "investigation",
-                "findings",
-                "results",
-            ],
-            "planning": [
-                "plan",
-                "planning",
-                "schedule",
-                "roadmap",
-                "milestone",
-                "timeline",
-            ],
+            "testing": ["test", "testing", "validate", "verification", "quality", "assertion"],
+            "research": ["research", "study", "analysis", "investigation", "findings", "results"],
+            "planning": ["plan", "planning", "schedule", "roadmap", "milestone", "timeline"],
         }
         text_lower = text.lower()
         for category, keywords in category_keywords.items():
-            if any(keyword in text_lower for keyword in keywords):
+            if any((keyword in text_lower for keyword in keywords)):
                 categories.append(category)
         return categories if categories else ["general"]
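
A small usage sketch of the extractor (class and method names are from the diff; the sample text is invented and the outputs are indicative only):

extractor = FactExtractor()
sample = "Ada Lovelace is a pioneer of computing. Python was created in 1991."
facts = extractor.extract_facts(sample)
print(facts[0]["type"])                    # expected: "definition" (first pattern in the list)
meta = extractor.extract_metadata(sample)
print(meta["word_count"], meta["dates"])   # the new date regex also matches bare years like "1991"
print(extractor.categorize_content("unit test coverage report"))  # expected: ["testing"]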

View File

@ -0,0 +1,253 @@
import json
import sqlite3
import threading
import time
from dataclasses import dataclass
from typing import Any, Dict, List, Optional, Tuple
from .semantic_index import SemanticIndex
@dataclass
class KnowledgeEntry:
entry_id: str
category: str
content: str
metadata: Dict[str, Any]
created_at: float
updated_at: float
access_count: int = 0
importance_score: float = 1.0
def to_dict(self) -> Dict[str, Any]:
return {
"entry_id": self.entry_id,
"category": self.category,
"content": self.content,
"metadata": self.metadata,
"created_at": self.created_at,
"updated_at": self.updated_at,
"access_count": self.access_count,
"importance_score": self.importance_score,
}
class KnowledgeStore:
def __init__(self, db_path: str):
self.db_path = db_path
self.conn = sqlite3.connect(self.db_path, check_same_thread=False)
self.lock = threading.Lock()
self.semantic_index = SemanticIndex()
self._initialize_store()
self._load_index()
def _initialize_store(self):
with self.lock:
cursor = self.conn.cursor()
cursor.execute(
"\n CREATE TABLE IF NOT EXISTS knowledge_entries (\n entry_id TEXT PRIMARY KEY,\n category TEXT NOT NULL,\n content TEXT NOT NULL,\n metadata TEXT,\n created_at REAL NOT NULL,\n updated_at REAL NOT NULL,\n access_count INTEGER DEFAULT 0,\n importance_score REAL DEFAULT 1.0\n )\n "
)
cursor.execute(
"\n CREATE INDEX IF NOT EXISTS idx_category ON knowledge_entries(category)\n "
)
cursor.execute(
"\n CREATE INDEX IF NOT EXISTS idx_importance ON knowledge_entries(importance_score DESC)\n "
)
cursor.execute(
"\n CREATE INDEX IF NOT EXISTS idx_created ON knowledge_entries(created_at DESC)\n "
)
self.conn.commit()
def _load_index(self):
with self.lock:
cursor = self.conn.cursor()
cursor.execute("SELECT entry_id, content FROM knowledge_entries")
for row in cursor.fetchall():
self.semantic_index.add_document(row[0], row[1])
def add_entry(self, entry: KnowledgeEntry):
with self.lock:
cursor = self.conn.cursor()
cursor.execute(
"\n INSERT OR REPLACE INTO knowledge_entries\n (entry_id, category, content, metadata, created_at, updated_at, access_count, importance_score)\n VALUES (?, ?, ?, ?, ?, ?, ?, ?)\n ",
(
entry.entry_id,
entry.category,
entry.content,
json.dumps(entry.metadata),
entry.created_at,
entry.updated_at,
entry.access_count,
entry.importance_score,
),
)
self.conn.commit()
self.semantic_index.add_document(entry.entry_id, entry.content)
def get_entry(self, entry_id: str) -> Optional[KnowledgeEntry]:
with self.lock:
cursor = self.conn.cursor()
cursor.execute(
"\n SELECT entry_id, category, content, metadata, created_at, updated_at, access_count, importance_score\n FROM knowledge_entries\n WHERE entry_id = ?\n ",
(entry_id,),
)
row = cursor.fetchone()
if row:
cursor.execute(
"\n UPDATE knowledge_entries\n SET access_count = access_count + 1\n WHERE entry_id = ?\n ",
(entry_id,),
)
self.conn.commit()
return KnowledgeEntry(
entry_id=row[0],
category=row[1],
content=row[2],
metadata=json.loads(row[3]) if row[3] else {},
created_at=row[4],
updated_at=row[5],
access_count=row[6] + 1,
importance_score=row[7],
)
return None
def search_entries(
self, query: str, category: Optional[str] = None, top_k: int = 5
) -> List[KnowledgeEntry]:
semantic_results = self.semantic_index.search(query, top_k * 2)
fts_results = self._fts_search(query, top_k * 2)
combined_results = {}
for entry_id, score in semantic_results:
combined_results[entry_id] = score * 0.7
for entry_id, score in fts_results:
if entry_id in combined_results:
combined_results[entry_id] = max(combined_results[entry_id], score * 1.0)
else:
combined_results[entry_id] = score * 1.0
sorted_results = sorted(combined_results.items(), key=lambda x: x[1], reverse=True)
with self.lock:
cursor = self.conn.cursor()
entries = []
for entry_id, score in sorted_results[:top_k]:
if category:
cursor.execute(
"\n SELECT entry_id, category, content, metadata, created_at, updated_at, access_count, importance_score\n FROM knowledge_entries\n WHERE entry_id = ? AND category = ?\n ",
(entry_id, category),
)
else:
cursor.execute(
"\n SELECT entry_id, category, content, metadata, created_at, updated_at, access_count, importance_score\n FROM knowledge_entries\n WHERE entry_id = ?\n ",
(entry_id,),
)
row = cursor.fetchone()
if row:
entry = KnowledgeEntry(
entry_id=row[0],
category=row[1],
content=row[2],
metadata=json.loads(row[3]) if row[3] else {},
created_at=row[4],
updated_at=row[5],
access_count=row[6],
importance_score=row[7],
)
entry.metadata["search_score"] = score
entries.append(entry)
return entries
def _fts_search(self, query: str, top_k: int = 10) -> List[Tuple[str, float]]:
"""Full Text Search with exact word and partial sentence matching."""
with self.lock:
cursor = self.conn.cursor()
query_lower = query.lower()
query_words = query_lower.split()
cursor.execute(
"\n SELECT entry_id, content\n FROM knowledge_entries\n WHERE LOWER(content) LIKE ?\n ",
(f"%{query_lower}%",),
)
exact_matches = []
partial_matches = []
for row in cursor.fetchall():
entry_id, content = row
content_lower = content.lower()
if query_lower in content_lower:
exact_matches.append((entry_id, 1.0))
continue
content_words = set(content_lower.split())
query_word_set = set(query_words)
matching_words = len(query_word_set & content_words)
if matching_words > 0:
word_overlap_score = matching_words / len(query_word_set)
consecutive_bonus = 0.0
for i in range(len(query_words)):
for j in range(i + 1, min(i + 4, len(query_words) + 1)):
phrase = " ".join(query_words[i:j])
if phrase in content_lower:
consecutive_bonus += 0.2 * (j - i)
total_score = min(0.99, word_overlap_score + consecutive_bonus)
partial_matches.append((entry_id, total_score))
all_results = exact_matches + partial_matches
all_results.sort(key=lambda x: x[1], reverse=True)
return all_results[:top_k]
def get_by_category(self, category: str, limit: int = 20) -> List[KnowledgeEntry]:
with self.lock:
cursor = self.conn.cursor()
cursor.execute(
"\n SELECT entry_id, category, content, metadata, created_at, updated_at, access_count, importance_score\n FROM knowledge_entries\n WHERE category = ?\n ORDER BY importance_score DESC, created_at DESC\n LIMIT ?\n ",
(category, limit),
)
entries = []
for row in cursor.fetchall():
entries.append(
KnowledgeEntry(
entry_id=row[0],
category=row[1],
content=row[2],
metadata=json.loads(row[3]) if row[3] else {},
created_at=row[4],
updated_at=row[5],
access_count=row[6],
importance_score=row[7],
)
)
return entries
def update_importance(self, entry_id: str, importance_score: float):
with self.lock:
cursor = self.conn.cursor()
cursor.execute(
"\n UPDATE knowledge_entries\n SET importance_score = ?, updated_at = ?\n WHERE entry_id = ?\n ",
(importance_score, time.time(), entry_id),
)
self.conn.commit()
def delete_entry(self, entry_id: str) -> bool:
with self.lock:
cursor = self.conn.cursor()
cursor.execute("DELETE FROM knowledge_entries WHERE entry_id = ?", (entry_id,))
deleted = cursor.rowcount > 0
self.conn.commit()
if deleted:
self.semantic_index.remove_document(entry_id)
return deleted
def get_statistics(self) -> Dict[str, Any]:
with self.lock:
cursor = self.conn.cursor()
cursor.execute("SELECT COUNT(*) FROM knowledge_entries")
total_entries = cursor.fetchone()[0]
cursor.execute("SELECT COUNT(DISTINCT category) FROM knowledge_entries")
total_categories = cursor.fetchone()[0]
cursor.execute(
"\n SELECT category, COUNT(*) as count\n FROM knowledge_entries\n GROUP BY category\n ORDER BY count DESC\n "
)
category_counts = {row[0]: row[1] for row in cursor.fetchall()}
cursor.execute("SELECT SUM(access_count) FROM knowledge_entries")
total_accesses = cursor.fetchone()[0] or 0
return {
"total_entries": total_entries,
"total_categories": total_categories,
"category_distribution": category_counts,
"total_accesses": total_accesses,
"vocabulary_size": len(self.semantic_index.vocabulary),
}
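
A minimal usage sketch of the store above (all names come from this file; the entry values are hypothetical, and the package is assumed importable so the relative SemanticIndex import resolves):

import time

store = KnowledgeStore(":memory:")  # sqlite3 accepts ":memory:" for a throwaway database
now = time.time()
entry = KnowledgeEntry(
    entry_id="k1",
    category="programming",
    content="TF-IDF weighs terms by in-document frequency and cross-document rarity.",
    metadata={},
    created_at=now,
    updated_at=now,
)
store.add_entry(entry)
for hit in store.search_entries("tf-idf rarity", top_k=3):
    # search_entries blends semantic scores (0.7 weight) with full-text scores
    # and stashes the combined value in metadata["search_score"].
    print(hit.entry_id, round(hit.metadata["search_score"], 3))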

View File

@ -5,15 +5,16 @@ from typing import Dict, List, Set, Tuple
 class SemanticIndex:
     def __init__(self):
         self.documents: Dict[str, str] = {}
         self.vocabulary: Set[str] = set()
         self.idf_scores: Dict[str, float] = {}
-        self.doc_vectors: Dict[str, Dict[str, float]] = {}
+        self.doc_tf_scores: Dict[str, Dict[str, float]] = {}
     def _tokenize(self, text: str) -> List[str]:
         text = text.lower()
-        text = re.sub(r"[^a-z0-9\s]", " ", text)
+        text = re.sub("[^a-z0-9\\s]", " ", text)
         tokens = text.split()
         return tokens
@ -26,14 +27,11 @@
         doc_count = len(self.documents)
         if doc_count == 0:
             return
         token_doc_count = defaultdict(int)
         for doc_id, doc_text in self.documents.items():
             tokens = set(self._tokenize(doc_text))
             for token in tokens:
                 token_doc_count[token] += 1
         if doc_count == 1:
             self.idf_scores = {token: 1.0 for token in token_doc_count}
         else:
@ -45,43 +43,43 @@
         self.documents[doc_id] = text
         tokens = self._tokenize(text)
         self.vocabulary.update(tokens)
-        self._compute_idf()
         tf_scores = self._compute_tf(tokens)
-        self.doc_vectors[doc_id] = {
-            token: tf_scores.get(token, 0) * self.idf_scores.get(token, 0) for token in tokens
-        }
+        self.doc_tf_scores[doc_id] = tf_scores
+        self._compute_idf()
     def remove_document(self, doc_id: str):
         if doc_id in self.documents:
             del self.documents[doc_id]
-            if doc_id in self.doc_vectors:
-                del self.doc_vectors[doc_id]
+            if doc_id in self.doc_tf_scores:
+                del self.doc_tf_scores[doc_id]
             self._compute_idf()
     def search(self, query: str, top_k: int = 5) -> List[Tuple[str, float]]:
+        if not query.strip():
+            return []
         query_tokens = self._tokenize(query)
+        if not query_tokens:
+            return []
         query_tf = self._compute_tf(query_tokens)
         query_vector = {
             token: query_tf.get(token, 0) * self.idf_scores.get(token, 0) for token in query_tokens
         }
         scores = []
-        for doc_id, doc_vector in self.doc_vectors.items():
+        for doc_id, doc_tf in self.doc_tf_scores.items():
+            doc_vector = {
+                token: doc_tf.get(token, 0) * self.idf_scores.get(token, 0) for token in doc_tf
+            }
             similarity = self._cosine_similarity(query_vector, doc_vector)
             scores.append((doc_id, similarity))
         scores.sort(key=lambda x: x[1], reverse=True)
         return scores[:top_k]
     def _cosine_similarity(self, vec1: Dict[str, float], vec2: Dict[str, float]) -> float:
         dot_product = sum(
-            vec1.get(token, 0) * vec2.get(token, 0) for token in set(vec1) | set(vec2)
+            (vec1.get(token, 0) * vec2.get(token, 0) for token in set(vec1) | set(vec2))
         )
-        norm1 = math.sqrt(sum(val**2 for val in vec1.values()))
-        norm2 = math.sqrt(sum(val**2 for val in vec2.values()))
+        norm1 = math.sqrt(sum((val**2 for val in vec1.values())))
+        norm2 = math.sqrt(sum((val**2 for val in vec2.values())))
         if norm1 == 0 or norm2 == 0:
             return 0
         return dot_product / (norm1 * norm2)
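
The switch from cached doc_vectors to doc_tf_scores defers IDF weighting to query time, so document vectors no longer go stale as later additions shift the IDF table (the old code baked the then-current IDF into each vector at add time). A quick sketch (document texts are invented):

index = SemanticIndex()
index.add_document("d1", "terminal multiplexer for background processes")
index.add_document("d2", "sqlite backed conversation memory")
print(index.search("background terminal", top_k=1))  # expected: [("d1", <positive cosine score>)]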

340 rp/multiplexer.py Normal file
View File

@ -0,0 +1,340 @@
import queue
import subprocess
import sys
import threading
import time
from rp.tools.process_handlers import detect_process_type, get_handler_for_process
from rp.tools.prompt_detection import get_global_detector
from rp.ui import Colors
class TerminalMultiplexer:
def __init__(self, name, show_output=True):
self.name = name
self.show_output = show_output
self.stdout_buffer = []
self.stderr_buffer = []
self.stdout_queue = queue.Queue()
self.stderr_queue = queue.Queue()
self.active = True
self.lock = threading.Lock()
self.metadata = {
"start_time": time.time(),
"last_activity": time.time(),
"interaction_count": 0,
"process_type": "unknown",
"state": "active",
}
self.handler = None
self.prompt_detector = get_global_detector()
if self.show_output:
self.display_thread = threading.Thread(target=self._display_worker, daemon=True)
self.display_thread.start()
def _display_worker(self):
while self.active:
try:
line = self.stdout_queue.get(timeout=0.1)
if line:
sys.stdout.write(line)
sys.stdout.flush()
except queue.Empty:
pass
try:
line = self.stderr_queue.get(timeout=0.1)
if line:
if self.metadata.get("process_type") in ["vim", "ssh"]:
sys.stderr.write(line)
else:
sys.stderr.write(f"{Colors.YELLOW}[{self.name} err]{Colors.RESET} {line}\n")
sys.stderr.flush()
except queue.Empty:
pass
def write_stdout(self, data):
with self.lock:
self.stdout_buffer.append(data)
self.metadata["last_activity"] = time.time()
if self.handler:
self.handler.update_state(data)
self.prompt_detector.update_session_state(
self.name, data, self.metadata["process_type"]
)
if self.show_output:
self.stdout_queue.put(data)
def write_stderr(self, data):
with self.lock:
self.stderr_buffer.append(data)
self.metadata["last_activity"] = time.time()
if self.handler:
self.handler.update_state(data)
self.prompt_detector.update_session_state(
self.name, data, self.metadata["process_type"]
)
if self.show_output:
self.stderr_queue.put(data)
def get_stdout(self):
with self.lock:
return "".join(self.stdout_buffer)
def get_stderr(self):
with self.lock:
return "".join(self.stderr_buffer)
def get_all_output(self):
with self.lock:
return {"stdout": "".join(self.stdout_buffer), "stderr": "".join(self.stderr_buffer)}
def get_metadata(self):
with self.lock:
return self.metadata.copy()
def update_metadata(self, key, value):
with self.lock:
self.metadata[key] = value
def set_process_type(self, process_type):
with self.lock:
self.metadata["process_type"] = process_type
self.handler = get_handler_for_process(process_type, self)
def send_input(self, input_data):
if hasattr(self, "process") and self.process.poll() is None:
try:
self.process.stdin.write(input_data + "\n")
self.process.stdin.flush()
with self.lock:
self.metadata["last_activity"] = time.time()
self.metadata["interaction_count"] += 1
except Exception as e:
self.write_stderr(f"Error sending input: {e}")
else:
with self.lock:
self.metadata["last_activity"] = time.time()
self.metadata["interaction_count"] += 1
def close(self):
self.active = False
if hasattr(self, "display_thread"):
self.display_thread.join(timeout=1)
multiplexer_registry = {}
multiplexer_counter = 0
multiplexer_lock = threading.Lock()
background_monitor = None
monitor_active = False
monitor_interval = 0.2
def create_multiplexer(name=None, show_output=True):
global multiplexer_counter
with multiplexer_lock:
if name is None:
multiplexer_counter += 1
name = f"process-{multiplexer_counter}"
multiplexer_instance = TerminalMultiplexer(name, show_output)
multiplexer_registry[name] = multiplexer_instance
return (name, multiplexer_instance)
def get_multiplexer(name):
return multiplexer_registry.get(name)
def close_multiplexer(name):
multiplexer_instance = multiplexer_registry.get(name)
if multiplexer_instance:
multiplexer_instance.close()
del multiplexer_registry[name]
def get_all_multiplexer_states():
with multiplexer_lock:
states = {}
for name, multiplexer_instance in multiplexer_registry.items():
states[name] = {
"metadata": multiplexer_instance.get_metadata(),
"output_summary": {
"stdout_lines": len(multiplexer_instance.stdout_buffer),
"stderr_lines": len(multiplexer_instance.stderr_buffer),
},
}
return states
def cleanup_all_multiplexers():
for multiplexer_instance in list(multiplexer_registry.values()):
multiplexer_instance.close()
multiplexer_registry.clear()
background_processes = {}
process_lock = threading.Lock()
class BackgroundProcess:
def __init__(self, name, command):
self.name = name
self.command = command
self.process = None
self.multiplexer = None
self.status = "starting"
self.start_time = time.time()
self.end_time = None
def start(self):
try:
multiplexer_name, multiplexer_instance = create_multiplexer(
self.name, show_output=False
)
self.multiplexer = multiplexer_instance
process_type = detect_process_type(self.command)
multiplexer_instance.set_process_type(process_type)
self.process = subprocess.Popen(
self.command,
shell=True,
stdin=subprocess.PIPE,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
text=True,
bufsize=1,
universal_newlines=True,
)
self.status = "running"
threading.Thread(target=self._monitor_stdout, daemon=True).start()
threading.Thread(target=self._monitor_stderr, daemon=True).start()
return {"status": "success", "pid": self.process.pid}
except Exception as e:
self.status = "error"
return {"status": "error", "error": str(e)}
def _monitor_stdout(self):
try:
for line in iter(self.process.stdout.readline, ""):
if line:
self.multiplexer.write_stdout(line.rstrip("\n\r"))
except Exception as e:
self.multiplexer.write_stderr(f"Error reading stdout: {e}")
finally:
self._check_completion()
def _monitor_stderr(self):
try:
for line in iter(self.process.stderr.readline, ""):
if line:
self.multiplexer.write_stderr(line.rstrip("\n\r"))
except Exception as e:
self.multiplexer.write_stderr(f"Error reading stderr: {e}")
def _check_completion(self):
if self.process and self.process.poll() is not None:
self.status = "completed"
self.end_time = time.time()
def get_info(self):
self._check_completion()
return {
"name": self.name,
"command": self.command,
"status": self.status,
"pid": self.process.pid if self.process else None,
"start_time": self.start_time,
"end_time": self.end_time,
"runtime": (
time.time() - self.start_time
if not self.end_time
else self.end_time - self.start_time
),
}
def get_output(self, lines=None):
if not self.multiplexer:
return []
all_output = self.multiplexer.get_all_output()
stdout_lines = all_output["stdout"].split("\n") if all_output["stdout"] else []
stderr_lines = all_output["stderr"].split("\n") if all_output["stderr"] else []
combined = stdout_lines + stderr_lines
if lines:
combined = combined[-lines:]
return [line for line in combined if line.strip()]
def send_input(self, input_text):
if self.process and self.status == "running":
try:
self.process.stdin.write(input_text + "\n")
self.process.stdin.flush()
return {"status": "success"}
except Exception as e:
return {"status": "error", "error": str(e)}
return {"status": "error", "error": "Process not running or no stdin"}
def kill(self):
if self.process and self.status == "running":
try:
self.process.terminate()
time.sleep(0.1)
if self.process.poll() is None:
self.process.kill()
self.status = "killed"
self.end_time = time.time()
return {"status": "success"}
except Exception as e:
return {"status": "error", "error": str(e)}
return {"status": "error", "error": "Process not running"}
def start_background_process(name, command):
with process_lock:
if name in background_processes:
return {"status": "error", "error": f"Process {name} already exists"}
process_instance = BackgroundProcess(name, command)
result = process_instance.start()
if result["status"] == "success":
background_processes[name] = process_instance
return result
def get_all_sessions():
with process_lock:
sessions = {}
for name, process_instance in background_processes.items():
sessions[name] = process_instance.get_info()
return sessions
def get_session_info(name):
with process_lock:
process_instance = background_processes.get(name)
return process_instance.get_info() if process_instance else None
def get_session_output(name, lines=None):
with process_lock:
process_instance = background_processes.get(name)
return process_instance.get_output(lines) if process_instance else None
def send_input_to_session(name, input_text):
with process_lock:
process_instance = background_processes.get(name)
return (
process_instance.send_input(input_text)
if process_instance
else {"status": "error", "error": "Session not found"}
)
def kill_session(name):
with process_lock:
process_instance = background_processes.get(name)
if process_instance:
result = process_instance.kill()
if result["status"] == "success":
del background_processes[name]
return result
return {"status": "error", "error": "Session not found"}

View File

@ -4,9 +4,9 @@ import sys
 import threading
 import time
-from pr.tools.process_handlers import detect_process_type, get_handler_for_process
-from pr.tools.prompt_detection import get_global_detector
-from pr.ui import Colors
+from rp.tools.process_handlers import detect_process_type, get_handler_for_process
+from rp.tools.prompt_detection import get_global_detector
+from rp.ui import Colors
 class TerminalMultiplexer:
@ -38,7 +38,10 @@
         try:
             line = self.stdout_queue.get(timeout=0.1)
             if line:
-                sys.stdout.write(f"{Colors.GRAY}[{self.name}]{Colors.RESET} {line}")
+                if self.metadata.get("process_type") in ["vim", "ssh"]:
+                    sys.stdout.write(line)
+                else:
+                    sys.stdout.write(f"{Colors.GRAY}[{self.name}]{Colors.RESET} {line}\n")
                 sys.stdout.flush()
         except queue.Empty:
             pass
@ -46,7 +49,10 @@
         try:
             line = self.stderr_queue.get(timeout=0.1)
             if line:
-                sys.stderr.write(f"{Colors.YELLOW}[{self.name} err]{Colors.RESET} {line}")
+                if self.metadata.get("process_type") in ["vim", "ssh"]:
+                    sys.stderr.write(line)
+                else:
+                    sys.stderr.write(f"{Colors.YELLOW}[{self.name} err]{Colors.RESET} {line}\n")
                 sys.stderr.flush()
         except queue.Empty:
             pass

View File

@ -2,12 +2,10 @@ import importlib.util
import os import os
import sys import sys
from typing import Callable, Dict, List from typing import Callable, Dict, List
from rp.core.logging import get_logger
from pr.core.logging import get_logger
logger = get_logger("plugins") logger = get_logger("plugins")
PLUGINS_DIR = os.path.expanduser("~/.rp/plugins")
PLUGINS_DIR = os.path.expanduser("~/.pr/plugins")
class PluginLoader: class PluginLoader:
@ -21,30 +19,24 @@ class PluginLoader:
if not os.path.exists(PLUGINS_DIR): if not os.path.exists(PLUGINS_DIR):
logger.info("No plugins directory found") logger.info("No plugins directory found")
return [] return []
plugin_files = [f for f in os.listdir(PLUGINS_DIR) if f.endswith(".py")] plugin_files = [f for f in os.listdir(PLUGINS_DIR) if f.endswith(".py")]
for plugin_file in plugin_files: for plugin_file in plugin_files:
try: try:
self._load_plugin_file(plugin_file) self._load_plugin_file(plugin_file)
except Exception as e: except Exception as e:
logger.error(f"Error loading plugin {plugin_file}: {e}") logger.error(f"Error loading plugin {plugin_file}: {e}")
return self.plugin_tools return self.plugin_tools
def _load_plugin_file(self, filename: str): def _load_plugin_file(self, filename: str):
plugin_path = os.path.join(PLUGINS_DIR, filename) plugin_path = os.path.join(PLUGINS_DIR, filename)
plugin_name = filename[:-3] plugin_name = filename[:-3]
spec = importlib.util.spec_from_file_location(plugin_name, plugin_path) spec = importlib.util.spec_from_file_location(plugin_name, plugin_path)
if spec is None or spec.loader is None: if spec is None or spec.loader is None:
logger.error(f"Could not load spec for {filename}") logger.error(f"Could not load spec for {filename}")
return return
module = importlib.util.module_from_spec(spec) module = importlib.util.module_from_spec(spec)
sys.modules[plugin_name] = module sys.modules[plugin_name] = module
spec.loader.exec_module(module) spec.loader.exec_module(module)
if hasattr(module, "register_tools"): if hasattr(module, "register_tools"):
tools = module.register_tools() tools = module.register_tools()
if isinstance(tools, list): if isinstance(tools, list):
@ -60,7 +52,6 @@ class PluginLoader:
for plugin_name, module in self.loaded_plugins.items(): for plugin_name, module in self.loaded_plugins.items():
if hasattr(module, tool_name): if hasattr(module, tool_name):
return getattr(module, tool_name) return getattr(module, tool_name)
raise ValueError(f"Plugin function not found: {tool_name}") raise ValueError(f"Plugin function not found: {tool_name}")
def list_loaded_plugins(self) -> List[str]: def list_loaded_plugins(self) -> List[str]:
@ -69,57 +60,9 @@ class PluginLoader:
def create_example_plugin(): def create_example_plugin():
example_plugin = os.path.join(PLUGINS_DIR, "example_plugin.py") example_plugin = os.path.join(PLUGINS_DIR, "example_plugin.py")
if os.path.exists(example_plugin): if os.path.exists(example_plugin):
return return
-    example_code = '''"""
-Example plugin for PR Assistant
-
-This plugin demonstrates how to create custom tools.
-"""
-
-
-def my_custom_tool(argument: str) -> str:
-    """
-    A custom tool that does something useful.
-
-    Args:
-        argument: Some input
-
-    Returns:
-        A result string
-    """
-    return f"Custom tool processed: {argument}"
-
-
-def register_tools():
-    """
-    Register tools with the PR assistant.
-
-    Returns:
-        List of tool definitions
-    """
-    return [
-        {
-            "type": "function",
-            "function": {
-                "name": "my_custom_tool",
-                "description": "A custom tool that processes input",
-                "parameters": {
-                    "type": "object",
-                    "properties": {
-                        "argument": {
-                            "type": "string",
-                            "description": "The input to process"
-                        }
-                    },
-                    "required": ["argument"]
-                }
-            }
-        }
-    ]
-'''
+    example_code = '"""\nExample plugin for PR Assistant\n\nThis plugin demonstrates how to create custom tools.\n"""\n\ndef my_custom_tool(argument: str) -> str:\n    """\n    A custom tool that does something useful.\n\n    Args:\n        argument: Some input\n\n    Returns:\n        A result string\n    """\n    return f"Custom tool processed: {argument}"\n\n\ndef register_tools():\n    """\n    Register tools with the PR assistant.\n\n    Returns:\n        List of tool definitions\n    """\n    return [\n        {\n            "type": "function",\n            "function": {\n                "name": "my_custom_tool",\n                "description": "A custom tool that processes input",\n                "parameters": {\n                    "type": "object",\n                    "properties": {\n                        "argument": {\n                            "type": "string",\n                            "description": "The input to process"\n                        }\n                    },\n                    "required": ["argument"]\n                }\n            }\n        }\n    ]\n'
     try:
         os.makedirs(PLUGINS_DIR, exist_ok=True)
         with open(example_plugin, "w") as f:
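For a sense of the plugin contract the loader expects, here is a minimal sketch: a single .py file dropped into ~/.rp/plugins that exposes register_tools(). The register_tools() hook and the plugins directory are taken from the code above; the exact schema fields mirror the bundled example and are otherwise illustrative.

    # ~/.rp/plugins/greeter.py -- a hypothetical plugin
    def greet(name: str) -> str:
        """Greet someone by name."""
        return f"Hello, {name}!"

    def register_tools():
        # The loader collects this list into plugin_tools (see PluginLoader above).
        return [
            {
                "type": "function",
                "function": {
                    "name": "greet",
                    "description": "Greet someone by name",
                    "parameters": {
                        "type": "object",
                        "properties": {"name": {"type": "string"}},
                        "required": ["name"],
                    },
                },
            }
        ]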

rp/rp.py (new executable file, +13 lines)
@@ -0,0 +1,13 @@
#!/usr/bin/env python3
# Trigger build
import sys
import os

# Add current directory to path to ensure imports work
sys.path.insert(0, os.path.dirname(os.path.abspath(__file__)))

from rp.__main__ import main

if __name__ == "__main__":
    main()
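This shim lets a repository checkout run without installation: sys.path.insert(0, ...) puts the checkout directory at the front of the import path before rp.__main__ is resolved, so `python rp/rp.py` (or `./rp/rp.py`, since the file is executable) uses the working tree rather than any installed copy of the package.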

rp/tools/__init__.py
@@ -1,26 +1,22 @@
-from pr.tools.agents import (
+from rp.tools.agents import (
     collaborate_agents,
     create_agent,
     execute_agent_task,
     list_agents,
     remove_agent,
 )
-from pr.tools.base import get_tools_definition
-from pr.tools.command import (
-    kill_process,
-    run_command,
-    run_command_interactive,
-    tail_process,
-)
-from pr.tools.database import db_get, db_query, db_set
-from pr.tools.editor import (
+from rp.tools.base import get_tools_definition
+from rp.tools.vision import post_image
+from rp.tools.command import kill_process, run_command, run_command_interactive, tail_process
+from rp.tools.database import db_get, db_query, db_set
+from rp.tools.editor import (
     close_editor,
     editor_insert_text,
     editor_replace_text,
     editor_search,
     open_editor,
 )
-from pr.tools.filesystem import (
+from rp.tools.filesystem import (
     chdir,
     getpwd,
     index_source_directory,
@@ -30,7 +26,8 @@ from pr.tools.filesystem import (
     search_replace,
     write_file,
 )
-from pr.tools.memory import (
+from rp.tools.lsp import get_diagnostics
+from rp.tools.memory import (
     add_knowledge_entry,
     delete_knowledge_entry,
     get_knowledge_by_category,
@@ -39,48 +36,71 @@ from pr.tools.memory import (
     search_knowledge,
     update_knowledge_importance,
 )
-from pr.tools.patch import apply_patch, create_diff
-from pr.tools.python_exec import python_exec
-from pr.tools.web import http_fetch, web_search, web_search_news
+from rp.tools.patch import apply_patch, create_diff
+from rp.tools.python_exec import python_exec
+from rp.tools.search import glob_files, grep
+from rp.tools.web import http_fetch, web_search, web_search_news
+
+# Aliases for user-requested tool names
+view = read_file
+write = write_file
+edit = search_replace
+patch = apply_patch
+glob = glob_files
+ls = list_directory
+diagnostics = get_diagnostics
+bash = run_command
+agent = execute_agent_task
+
 __all__ = [
-    "get_tools_definition",
-    "read_file",
-    "write_file",
-    "list_directory",
-    "mkdir",
-    "chdir",
-    "getpwd",
-    "index_source_directory",
-    "search_replace",
-    "open_editor",
-    "editor_insert_text",
-    "editor_replace_text",
-    "editor_search",
-    "close_editor",
-    "run_command",
-    "run_command_interactive",
-    "db_set",
-    "db_get",
-    "db_query",
-    "http_fetch",
-    "web_search",
-    "web_search_news",
-    "python_exec",
-    "tail_process",
-    "kill_process",
-    "apply_patch",
-    "create_diff",
-    "create_agent",
-    "list_agents",
-    "execute_agent_task",
-    "remove_agent",
-    "collaborate_agents",
-    "add_knowledge_entry",
-    "get_knowledge_entry",
-    "search_knowledge",
-    "get_knowledge_by_category",
-    "update_knowledge_importance",
-    "delete_knowledge_entry",
-    "get_knowledge_statistics",
+    "add_knowledge_entry",
+    "apply_patch",
+    "bash",
+    "chdir",
+    "close_editor",
+    "collaborate_agents",
+    "create_agent",
+    "create_diff",
+    "db_get",
+    "db_query",
+    "db_set",
+    "delete_knowledge_entry",
+    "diagnostics",
+    "post_image",
+    "edit",
+    "editor_insert_text",
+    "editor_replace_text",
+    "editor_search",
+    "execute_agent_task",
+    "get_knowledge_by_category",
+    "get_knowledge_entry",
+    "get_knowledge_statistics",
+    "get_tools_definition",
+    "getpwd",
+    "glob",
+    "glob_files",
+    "grep",
+    "http_fetch",
+    "index_source_directory",
+    "kill_process",
+    "list_agents",
+    "list_directory",
+    "ls",
+    "mkdir",
+    "open_editor",
+    "patch",
+    "python_exec",
+    "read_file",
+    "remove_agent",
+    "run_command",
+    "run_command_interactive",
+    "search_knowledge",
+    "search_replace",
+    "tail_process",
+    "update_knowledge_importance",
+    "view",
+    "web_search",
+    "web_search_news",
+    "write",
+    "write_file",
 ]
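Since the aliases above are plain name rebindings, the alias and the original resolve to the same function object; get_tools_definition() in rp/tools/base.py (below) relies on this, deduplicating by id() so each tool is registered once even though both names appear in __all__. A quick check, assuming the package imports cleanly:

    from rp import tools

    assert tools.view is tools.read_file    # alias and original share one function object
    assert tools.bash is tools.run_command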

rp/tools/agents.py
@@ -1,18 +1,40 @@
 import os
 from typing import Any, Dict, List

-from pr.agents.agent_manager import AgentManager
-from pr.core.api import call_api
+from rp.agents.agent_manager import AgentManager
+from rp.core.api import call_api
+from rp.config import DEFAULT_MODEL, DEFAULT_API_URL
+from rp.tools.base import get_tools_definition
+
+
+def _create_api_wrapper():
+    """Create a wrapper function for call_api that matches AgentManager expectations."""
+    model = os.environ.get("AI_MODEL", DEFAULT_MODEL)
+    api_url = os.environ.get("API_URL", DEFAULT_API_URL)
+    api_key = os.environ.get("OPENROUTER_API_KEY", "")
+    use_tools = int(os.environ.get("USE_TOOLS", "0"))
+    tools_definition = get_tools_definition() if use_tools else []
+
+    def api_wrapper(messages, temperature=None, max_tokens=None, **kwargs):
+        return call_api(
+            messages=messages,
+            model=model,
+            api_url=api_url,
+            api_key=api_key,
+            use_tools=use_tools,
+            tools_definition=tools_definition,
+            verbose=False,
+        )
+
+    return api_wrapper
 def create_agent(role_name: str, agent_id: str = None) -> Dict[str, Any]:
     """Create a new agent with the specified role."""
     try:
+        # Get db_path from environment or default
         db_path = os.environ.get("ASSISTANT_DB_PATH", "~/.assistant_db.sqlite")
         db_path = os.path.expanduser(db_path)
-        manager = AgentManager(db_path, call_api)
+        api_wrapper = _create_api_wrapper()
+        manager = AgentManager(db_path, api_wrapper)
         agent_id = manager.create_agent(role_name, agent_id)
         return {"status": "success", "agent_id": agent_id, "role": role_name}
     except Exception as e:
@@ -23,7 +45,8 @@ def list_agents() -> Dict[str, Any]:
     """List all active agents."""
     try:
         db_path = os.path.expanduser("~/.assistant_db.sqlite")
-        manager = AgentManager(db_path, call_api)
+        api_wrapper = _create_api_wrapper()
+        manager = AgentManager(db_path, api_wrapper)
         agents = []
         for agent_id, agent in manager.active_agents.items():
             agents.append(
@@ -43,7 +66,8 @@ def execute_agent_task(agent_id: str, task: str, context: Dict[str, Any] = None)
     """Execute a task with the specified agent."""
     try:
         db_path = os.path.expanduser("~/.assistant_db.sqlite")
-        manager = AgentManager(db_path, call_api)
+        api_wrapper = _create_api_wrapper()
+        manager = AgentManager(db_path, api_wrapper)
         result = manager.execute_agent_task(agent_id, task, context)
         return result
     except Exception as e:
@@ -54,7 +78,8 @@ def remove_agent(agent_id: str) -> Dict[str, Any]:
     """Remove an agent."""
     try:
         db_path = os.path.expanduser("~/.assistant_db.sqlite")
-        manager = AgentManager(db_path, call_api)
+        api_wrapper = _create_api_wrapper()
+        manager = AgentManager(db_path, api_wrapper)
         success = manager.remove_agent(agent_id)
         return {"status": "success" if success else "not_found", "agent_id": agent_id}
     except Exception as e:
@@ -65,7 +90,8 @@ def collaborate_agents(orchestrator_id: str, task: str, agent_roles: List[str])
     """Collaborate multiple agents on a task."""
     try:
         db_path = os.path.expanduser("~/.assistant_db.sqlite")
-        manager = AgentManager(db_path, call_api)
+        api_wrapper = _create_api_wrapper()
+        manager = AgentManager(db_path, api_wrapper)
         result = manager.collaborate_agents(orchestrator_id, task, agent_roles)
         return result
     except Exception as e:
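_create_api_wrapper() reads the model, endpoint, key, and tool flag from the environment each time an agent helper is called, so configuration changes need no code edits. A hedged usage sketch; the role name and model id below are illustrative, not values from this diff:

    import os

    os.environ["AI_MODEL"] = "example/model-id"   # hypothetical model identifier
    os.environ["USE_TOOLS"] = "1"                 # non-zero enables get_tools_definition()

    from rp.tools.agents import create_agent, execute_agent_task

    created = create_agent("researcher")          # role name is illustrative
    if created["status"] == "success":
        print(execute_agent_task(created["agent_id"], "Summarize README.md"))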

rp/tools/base.py (new file, +118 lines)
@@ -0,0 +1,118 @@
import inspect

import rp.tools
from typing import get_type_hints, get_origin, get_args


def _type_to_json_schema(py_type):
    """Convert Python type to JSON Schema type."""
    if py_type == str:
        return {"type": "string"}
    elif py_type == int:
        return {"type": "integer"}
    elif py_type == float:
        return {"type": "number"}
    elif py_type == bool:
        return {"type": "boolean"}
    elif get_origin(py_type) == list:
        return {"type": "array", "items": _type_to_json_schema(get_args(py_type)[0])}
    elif get_origin(py_type) == dict:
        return {"type": "object"}
    else:
        return {"type": "string"}


def _generate_tool_schema(func):
    """Generate JSON Schema for a tool function."""
    sig = inspect.signature(func)
    docstring = func.__doc__ or ""
    description = docstring.strip().split("\n")[0] if docstring else ""
    type_hints = get_type_hints(func)
    properties = {}
    required = []
    for param_name, param in sig.parameters.items():
        if param_name in ["db_conn", "python_globals"]:
            continue
        param_type = type_hints.get(param_name, str)
        schema = _type_to_json_schema(param_type)
        param_doc = ""
        if docstring:
            lines = docstring.split("\n")
            in_args = False
            for line in lines:
                line = line.strip()
                if line.startswith("Args:") or line.startswith("Arguments:"):
                    in_args = True
                    continue
                elif in_args and line.startswith(param_name + ":"):
                    param_doc = line.split(":", 1)[1].strip()
                    break
                elif in_args and line == "":
                    continue
                elif in_args and (not line.startswith(" ")):
                    break
        if param_doc:
            schema["description"] = param_doc
        if param.default != inspect.Parameter.empty:
            schema["default"] = param.default
        properties[param_name] = schema
        if param.default == inspect.Parameter.empty:
            required.append(param_name)
    return {
        "type": "function",
        "function": {
            "name": func.__name__,
            "description": description,
            "parameters": {"type": "object", "properties": properties, "required": required},
        },
    }


def get_tools_definition():
    """Dynamically generate tool definitions from all tool functions."""
    tools = []
    seen_functions = set()
    all_names = getattr(rp.tools, "__all__", [])
    for name in all_names:
        if name.startswith("_"):
            continue
        obj = getattr(rp.tools, name, None)
        if callable(obj) and hasattr(obj, "__module__") and obj.__module__.startswith("rp.tools."):
            # Skip duplicates by checking function identity
            if id(obj) in seen_functions:
                continue
            seen_functions.add(id(obj))
            if obj.__doc__:
                try:
                    schema = _generate_tool_schema(obj)
                    tools.append(schema)
                except Exception as e:
                    print(f"Warning: Could not generate schema for {name}: {e}")
                    continue
    return tools


def get_func_map(db_conn=None, python_globals=None):
    """Dynamically generate function map for tool execution."""
    func_map = {}
    for name in getattr(rp.tools, "__all__", []):
        if name.startswith("_"):
            continue
        obj = getattr(rp.tools, name, None)
        if callable(obj) and hasattr(obj, "__module__") and obj.__module__.startswith("rp.tools."):
            sig = inspect.signature(obj)
            params = list(sig.parameters.keys())
            if "db_conn" in params and "python_globals" in params:
                func_map[name] = (
                    lambda func=obj, db_conn=db_conn, python_globals=python_globals, **kw: func(
                        **kw, db_conn=db_conn, python_globals=python_globals
                    )
                )
            elif "db_conn" in params:
                func_map[name] = lambda func=obj, db_conn=db_conn, **kw: func(**kw, db_conn=db_conn)
            elif "python_globals" in params:
                func_map[name] = lambda func=obj, python_globals=python_globals, **kw: func(
                    **kw, python_globals=python_globals
                )
            else:
                func_map[name] = lambda func=obj, **kw: func(**kw)
    return func_map
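The `lambda func=obj, db_conn=db_conn, ...` defaults in get_func_map are deliberate: default values are evaluated when each lambda is created, so every closure captures that iteration's obj instead of the loop variable's final value. A minimal demonstration of the difference:

    fns_wrong = [lambda: i for i in range(3)]
    fns_right = [lambda i=i: i for i in range(3)]

    print([f() for f in fns_wrong])   # [2, 2, 2] -- all closures see the final i
    print([f() for f in fns_right])   # [0, 1, 2] -- the default froze each i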

rp/tools/command.py
@@ -1,10 +1,7 @@
-import os
 import select
 import subprocess
 import time

-from pr.multiplexer import close_multiplexer, create_multiplexer, get_multiplexer
-
 _processes = {}
@@ -23,11 +20,9 @@ def kill_process(pid: int):
         if process:
             process.kill()
             _processes.pop(pid)
             mux_name = f"cmd-{pid}"
             if get_multiplexer(mux_name):
                 close_multiplexer(mux_name)
             return {"status": "success", "message": f"Process {pid} has been killed"}
         else:
             return {"status": "error", "error": f"Process {pid} not found"}
@@ -40,16 +35,13 @@ def tail_process(pid: int, timeout: int = 30):
     if process:
         mux_name = f"cmd-{pid}"
         mux = get_multiplexer(mux_name)
         if not mux:
             mux_name, mux = create_multiplexer(mux_name, show_output=True)
         try:
             start_time = time.time()
             timeout_duration = timeout
             stdout_content = ""
             stderr_content = ""
             while True:
                 if process.poll() is not None:
                     remaining_stdout, remaining_stderr = process.communicate()
@@ -59,19 +51,15 @@ def tail_process(pid: int, timeout: int = 30):
                     if remaining_stderr:
                         mux.write_stderr(remaining_stderr)
                         stderr_content += remaining_stderr
                     if pid in _processes:
                         _processes.pop(pid)
                     close_multiplexer(mux_name)
                     return {
                         "status": "success",
                         "stdout": stdout_content,
                         "stderr": stderr_content,
                         "returncode": process.returncode,
                     }
                 if time.time() - start_time > timeout_duration:
                     return {
                         "status": "running",
@@ -80,7 +68,6 @@ def tail_process(pid: int, timeout: int = 30):
                         "stderr_so_far": stderr_content,
                         "pid": pid,
                     }
                 ready, _, _ = select.select([process.stdout, process.stderr], [], [], 0.1)
                 for pipe in ready:
                     if pipe == process.stdout:
@@ -99,25 +86,31 @@ def tail_process(pid: int, timeout: int = 30):
         return {"status": "error", "error": f"Process {pid} not found"}


-def run_command(command, timeout=30, monitored=False):
+def run_command(command, timeout=30, monitored=False, cwd=None):
+    """Execute a shell command and return the output.
+
+    Args:
+        command: The shell command to execute.
+        timeout: Maximum time in seconds to wait for completion.
+        monitored: Whether to monitor the process (unused).
+        cwd: Working directory for the command.
+
+    Returns:
+        Dict with status, stdout, stderr, returncode, and optionally pid if still running.
+    """
+    from rp.multiplexer import close_multiplexer, create_multiplexer
+
     mux_name = None
     try:
         process = subprocess.Popen(
-            command,
-            shell=True,
-            stdout=subprocess.PIPE,
-            stderr=subprocess.PIPE,
-            text=True,
+            command, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True, cwd=cwd
         )
         _register_process(process.pid, process)
         mux_name, mux = create_multiplexer(f"cmd-{process.pid}", show_output=True)
         start_time = time.time()
         timeout_duration = timeout
         stdout_content = ""
         stderr_content = ""
         while True:
             if process.poll() is not None:
                 remaining_stdout, remaining_stderr = process.communicate()
@@ -127,19 +120,15 @@ def run_command(command, timeout=30, monitored=False, cwd=None):
                 if remaining_stderr:
                     mux.write_stderr(remaining_stderr)
                     stderr_content += remaining_stderr
                 if process.pid in _processes:
                     _processes.pop(process.pid)
                 close_multiplexer(mux_name)
                 return {
                     "status": "success",
                     "stdout": stdout_content,
                     "stderr": stderr_content,
                     "returncode": process.returncode,
                 }
             if time.time() - start_time > timeout_duration:
                 return {
                     "status": "running",
@@ -149,7 +138,6 @@ def run_command(command, timeout=30, monitored=False, cwd=None):
                     "pid": process.pid,
                     "mux_name": mux_name,
                 }
             ready, _, _ = select.select([process.stdout, process.stderr], [], [], 0.1)
             for pipe in ready:
                 if pipe == process.stdout:
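On timeout, run_command returns {"status": "running"} with the pid rather than blocking or killing, leaving the caller to keep polling or terminate. A sketch of that caller loop, using only the functions defined in this file:

    from rp.tools.command import run_command, tail_process, kill_process

    res = run_command("sleep 60 && echo done", timeout=5)
    if res["status"] == "running":     # timed out, process still alive
        res = tail_process(res["pid"], timeout=5)
    if res["status"] == "running":     # still not finished; give up
        kill_process(res["pid"])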

rp/tools/command.py.bak (new file, +176 lines)
@@ -0,0 +1,176 @@
print(f"Executing command: {command}")
print(f"Killing process: {pid}")
import os
import select
import subprocess
import time
from rp.multiplexer import close_multiplexer, create_multiplexer, get_multiplexer
_processes = {}
def _register_process(pid: int, process):
_processes[pid] = process
return _processes
def _get_process(pid: int):
return _processes.get(pid)
def kill_process(pid: int):
try:
process = _get_process(pid)
if process:
process.kill()
_processes.pop(pid)
mux_name = f"cmd-{pid}"
if get_multiplexer(mux_name):
close_multiplexer(mux_name)
return {"status": "success", "message": f"Process {pid} has been killed"}
else:
return {"status": "error", "error": f"Process {pid} not found"}
except Exception as e:
return {"status": "error", "error": str(e)}
def tail_process(pid: int, timeout: int = 30):
process = _get_process(pid)
if process:
mux_name = f"cmd-{pid}"
mux = get_multiplexer(mux_name)
if not mux:
mux_name, mux = create_multiplexer(mux_name, show_output=True)
try:
start_time = time.time()
timeout_duration = timeout
stdout_content = ""
stderr_content = ""
while True:
if process.poll() is not None:
remaining_stdout, remaining_stderr = process.communicate()
if remaining_stdout:
mux.write_stdout(remaining_stdout)
stdout_content += remaining_stdout
if remaining_stderr:
mux.write_stderr(remaining_stderr)
stderr_content += remaining_stderr
if pid in _processes:
_processes.pop(pid)
close_multiplexer(mux_name)
return {
"status": "success",
"stdout": stdout_content,
"stderr": stderr_content,
"returncode": process.returncode,
}
if time.time() - start_time > timeout_duration:
return {
"status": "running",
"message": "Process is still running. Call tail_process again to continue monitoring.",
"stdout_so_far": stdout_content,
"stderr_so_far": stderr_content,
"pid": pid,
}
ready, _, _ = select.select([process.stdout, process.stderr], [], [], 0.1)
for pipe in ready:
if pipe == process.stdout:
line = process.stdout.readline()
if line:
mux.write_stdout(line)
stdout_content += line
elif pipe == process.stderr:
line = process.stderr.readline()
if line:
mux.write_stderr(line)
stderr_content += line
except Exception as e:
return {"status": "error", "error": str(e)}
else:
return {"status": "error", "error": f"Process {pid} not found"}
def run_command(command, timeout=30, monitored=False):
mux_name = None
try:
process = subprocess.Popen(
command,
shell=True,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
text=True,
)
_register_process(process.pid, process)
mux_name, mux = create_multiplexer(f"cmd-{process.pid}", show_output=True)
start_time = time.time()
timeout_duration = timeout
stdout_content = ""
stderr_content = ""
while True:
if process.poll() is not None:
remaining_stdout, remaining_stderr = process.communicate()
if remaining_stdout:
mux.write_stdout(remaining_stdout)
stdout_content += remaining_stdout
if remaining_stderr:
mux.write_stderr(remaining_stderr)
stderr_content += remaining_stderr
if process.pid in _processes:
_processes.pop(process.pid)
close_multiplexer(mux_name)
return {
"status": "success",
"stdout": stdout_content,
"stderr": stderr_content,
"returncode": process.returncode,
}
if time.time() - start_time > timeout_duration:
return {
"status": "running",
"message": f"Process still running after {timeout}s timeout. Use tail_process({process.pid}) to monitor or kill_process({process.pid}) to terminate.",
"stdout_so_far": stdout_content,
"stderr_so_far": stderr_content,
"pid": process.pid,
"mux_name": mux_name,
}
ready, _, _ = select.select([process.stdout, process.stderr], [], [], 0.1)
for pipe in ready:
if pipe == process.stdout:
line = process.stdout.readline()
if line:
mux.write_stdout(line)
stdout_content += line
elif pipe == process.stderr:
line = process.stderr.readline()
if line:
mux.write_stderr(line)
stderr_content += line
except Exception as e:
if mux_name:
close_multiplexer(mux_name)
return {"status": "error", "error": str(e)}
def run_command_interactive(command):
try:
return_code = os.system(command)
return {"status": "success", "returncode": return_code}
except Exception as e:
return {"status": "error", "error": str(e)}

(unnamed new file, +70 lines; tools for editing .rcontext.txt)
@@ -0,0 +1,70 @@
import os
from typing import Optional

CONTEXT_FILE = "/home/retoor/.local/share/rp/.rcontext.txt"


def _read_context() -> str:
    if not os.path.exists(CONTEXT_FILE):
        raise FileNotFoundError(f"Context file {CONTEXT_FILE} not found.")
    with open(CONTEXT_FILE, "r") as f:
        return f.read()


def _write_context(content: str):
    with open(CONTEXT_FILE, "w") as f:
        f.write(content)


def modify_context_add(new_content: str, position: Optional[str] = None) -> str:
    """
    Add new content to the .rcontext.txt file.

    Args:
        new_content: The content to add.
        position: Optional marker to insert before (e.g., '***').
    """
    current = _read_context()
    if position and position in current:
        parts = current.split(position, 1)
        updated = parts[0] + new_content + "\n\n" + position + parts[1]
    else:
        updated = current + "\n\n" + new_content
    _write_context(updated)
    return f"Added: {new_content[:100]}... (full addition applied). Consequences: Enhances functionality as requested."


def modify_context_replace(old_content: str, new_content: str) -> str:
    """
    Replace old content with new content in .rcontext.txt.

    Args:
        old_content: The content to replace.
        new_content: The replacement content.
    """
    current = _read_context()
    if old_content not in current:
        raise ValueError(f"Old content not found: {old_content[:50]}...")
    updated = current.replace(old_content, new_content, 1)
    _write_context(updated)
    return f"Replaced: '{old_content[:50]}...' with '{new_content[:50]}...'. Consequences: Changes behavior as specified; verify for unintended effects."


def modify_context_delete(content_to_delete: str, confirmed: bool = False) -> str:
    """
    Delete content from .rcontext.txt, but only if confirmed.

    Args:
        content_to_delete: The content to delete.
        confirmed: Must be True to proceed with deletion.
    """
    if not confirmed:
        raise PermissionError(
            f"Deletion not confirmed. To delete '{content_to_delete[:50]}...', you must explicitly confirm. Are you sure? This may affect system behavior permanently."
        )
    current = _read_context()
    if content_to_delete not in current:
        raise ValueError(f"Content to delete not found: {content_to_delete[:50]}...")
    updated = current.replace(content_to_delete, "", 1)
    _write_context(updated)
    return f"Deleted: '{content_to_delete[:50]}...'. Consequences: Removed specified content; system may lose referenced rules or guidelines."
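The delete path is deliberately two-phase: the first call raises PermissionError so the caller must repeat the request with confirmed=True. A usage sketch; the module path rp.tools.context is an assumption, since this file's name is not shown in the diff:

    from rp.tools.context import modify_context_add, modify_context_delete

    modify_context_add("Prefer concise answers.", position="***")
    try:
        modify_context_delete("Prefer concise answers.")   # raises PermissionError
    except PermissionError:
        modify_context_delete("Prefer concise answers.", confirmed=True)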

rp/tools/database.py
@@ -2,14 +2,22 @@ import time
 def db_set(key, value, db_conn):
+    """Set a key-value pair in the database.
+
+    Args:
+        key: The key to set.
+        value: The value to store.
+        db_conn: Database connection.
+
+    Returns:
+        Dict with status and message.
+    """
     if not db_conn:
         return {"status": "error", "error": "Database not initialized"}
     try:
         cursor = db_conn.cursor()
         cursor.execute(
-            """INSERT OR REPLACE INTO kv_store (key, value, timestamp)
-            VALUES (?, ?, ?)""",
+            "INSERT OR REPLACE INTO kv_store (key, value, timestamp)\n VALUES (?, ?, ?)",
             (key, value, time.time()),
         )
         db_conn.commit()
@@ -19,9 +27,17 @@ def db_set(key, value, db_conn):
 def db_get(key, db_conn):
+    """Get a value from the database.
+
+    Args:
+        key: The key to retrieve.
+        db_conn: Database connection.
+
+    Returns:
+        Dict with status and value.
+    """
     if not db_conn:
         return {"status": "error", "error": "Database not initialized"}
     try:
         cursor = db_conn.cursor()
         cursor.execute("SELECT value FROM kv_store WHERE key = ?", (key,))
@@ -35,13 +51,20 @@ def db_get(key, db_conn):
 def db_query(query, db_conn):
+    """Execute a database query.
+
+    Args:
+        query: SQL query to execute.
+        db_conn: Database connection.
+
+    Returns:
+        Dict with status and query results.
+    """
     if not db_conn:
         return {"status": "error", "error": "Database not initialized"}
     try:
         cursor = db_conn.cursor()
         cursor.execute(query)
         if query.strip().upper().startswith("SELECT"):
             results = cursor.fetchall()
             columns = [desc[0] for desc in cursor.description] if cursor.description else []
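These helpers assume a kv_store table already exists. A self-contained sketch against in-memory SQLite; the CREATE TABLE statement is inferred from the INSERT above and is not part of this diff:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE kv_store (key TEXT PRIMARY KEY, value TEXT, timestamp REAL)")

    from rp.tools.database import db_set, db_get, db_query

    db_set("greeting", "hello", conn)
    print(db_get("greeting", conn))                        # expect a success dict carrying "hello"
    print(db_query("SELECT COUNT(*) FROM kv_store", conn))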

rp/tools/debugging.py (new file, +795 lines)
@@ -0,0 +1,795 @@
import sys
import os
import ast
import inspect
import time
import threading
import gc
import weakref
import linecache
import re
import json
import subprocess
from collections import defaultdict
from datetime import datetime
class MemoryTracker:
def __init__(self):
self.allocations = defaultdict(list)
self.references = weakref.WeakValueDictionary()
self.peak_memory = 0
self.current_memory = 0
def track_object(self, obj, location):
try:
obj_id = id(obj)
obj_size = sys.getsizeof(obj)
self.allocations[location].append(
{
"id": obj_id,
"type": type(obj).__name__,
"size": obj_size,
"timestamp": time.time(),
}
)
self.current_memory += obj_size
if self.current_memory > self.peak_memory:
self.peak_memory = self.current_memory
except:
pass
def analyze_leaks(self):
gc.collect()
leaks = []
for obj in gc.get_objects():
if sys.getrefcount(obj) > 10:
try:
leaks.append(
{
"type": type(obj).__name__,
"refcount": sys.getrefcount(obj),
"size": sys.getsizeof(obj),
}
)
except:
pass
return sorted(leaks, key=lambda x: x["refcount"], reverse=True)[:20]
def get_report(self):
return {
"peak_memory": self.peak_memory,
"current_memory": self.current_memory,
"allocation_count": sum((len(v) for v in self.allocations.values())),
"leaks": self.analyze_leaks(),
}
class PerformanceProfiler:
def __init__(self):
self.function_times = defaultdict(lambda: {"calls": 0, "total_time": 0.0, "self_time": 0.0})
self.call_stack = []
self.start_times = {}
def enter_function(self, frame):
func_name = self._get_function_name(frame)
self.call_stack.append(func_name)
self.start_times[id(frame)] = time.perf_counter()
def exit_function(self, frame):
func_name = self._get_function_name(frame)
frame_id = id(frame)
if frame_id in self.start_times:
elapsed = time.perf_counter() - self.start_times[frame_id]
self.function_times[func_name]["calls"] += 1
self.function_times[func_name]["total_time"] += elapsed
self.function_times[func_name]["self_time"] += elapsed
del self.start_times[frame_id]
if self.call_stack:
self.call_stack.pop()
def _get_function_name(self, frame):
return f"{frame.f_code.co_filename}:{frame.f_code.co_name}:{frame.f_lineno}"
def get_hotspots(self, limit=20):
sorted_funcs = sorted(
self.function_times.items(), key=lambda x: x[1]["total_time"], reverse=True
)
return sorted_funcs[:limit]
class StaticAnalyzer(ast.NodeVisitor):
def __init__(self):
self.issues = []
self.complexity = 0
self.unused_vars = set()
self.undefined_vars = set()
self.defined_vars = set()
self.used_vars = set()
self.functions = {}
self.classes = {}
self.imports = []
def visit_FunctionDef(self, node):
self.functions[node.name] = {
"lineno": node.lineno,
"args": [arg.arg for arg in node.args.args],
"decorators": [
d.id if isinstance(d, ast.Name) else "complex" for d in node.decorator_list
],
"complexity": self._calculate_complexity(node),
}
if len(node.body) == 0:
self.issues.append(f"Line {node.lineno}: Empty function '{node.name}'")
if len(node.args.args) > 7:
self.issues.append(
f"Line {node.lineno}: Function '{node.name}' has too many parameters ({len(node.args.args)})"
)
self.generic_visit(node)
def visit_ClassDef(self, node):
self.classes[node.name] = {
"lineno": node.lineno,
"bases": [b.id if isinstance(b, ast.Name) else "complex" for b in node.bases],
"methods": [],
}
self.generic_visit(node)
def visit_Import(self, node):
for alias in node.names:
self.imports.append(alias.name)
self.generic_visit(node)
def visit_ImportFrom(self, node):
if node.module:
self.imports.append(node.module)
self.generic_visit(node)
def visit_Name(self, node):
if isinstance(node.ctx, ast.Store):
self.defined_vars.add(node.id)
elif isinstance(node.ctx, ast.Load):
self.used_vars.add(node.id)
self.generic_visit(node)
def visit_If(self, node):
self.complexity += 1
self.generic_visit(node)
def visit_For(self, node):
self.complexity += 1
self.generic_visit(node)
def visit_While(self, node):
self.complexity += 1
self.generic_visit(node)
def visit_ExceptHandler(self, node):
self.complexity += 1
if node.type is None:
self.issues.append(f"Line {node.lineno}: Bare except clause (catches all exceptions)")
self.generic_visit(node)
def visit_BinOp(self, node):
if isinstance(node.op, ast.Add):
if isinstance(node.left, ast.Str) or isinstance(node.right, ast.Str):
self.issues.append(
f"Line {node.lineno}: String concatenation with '+' (use f-strings or join)"
)
self.generic_visit(node)
def _calculate_complexity(self, node):
complexity = 1
for child in ast.walk(node):
if isinstance(child, (ast.If, ast.For, ast.While, ast.ExceptHandler)):
complexity += 1
return complexity
def finalize(self):
self.unused_vars = self.defined_vars - self.used_vars
self.undefined_vars = self.used_vars - self.defined_vars - set(dir(__builtins__))
for var in self.unused_vars:
self.issues.append(f"Unused variable: '{var}'")
def analyze_code(self, source_code):
try:
tree = ast.parse(source_code)
self.visit(tree)
self.finalize()
return {
"issues": self.issues,
"complexity": self.complexity,
"functions": self.functions,
"classes": self.classes,
"imports": self.imports,
"unused_vars": list(self.unused_vars),
}
except SyntaxError as e:
return {"error": f"Syntax error at line {e.lineno}: {e.msg}"}
class DynamicTracer:
def __init__(self):
self.execution_trace = []
self.exception_trace = []
self.variable_changes = defaultdict(list)
self.line_coverage = set()
self.function_calls = defaultdict(int)
self.max_trace_length = 10000
self.memory_tracker = MemoryTracker()
self.profiler = PerformanceProfiler()
self.trace_active = False
def trace_calls(self, frame, event, arg):
if not self.trace_active:
return
if len(self.execution_trace) >= self.max_trace_length:
return self.trace_calls
co = frame.f_code
func_name = co.co_name
filename = co.co_filename
line_no = frame.f_lineno
if "site-packages" in filename or filename.startswith("<"):
return self.trace_calls
trace_entry = {
"event": event,
"function": func_name,
"filename": filename,
"lineno": line_no,
"timestamp": time.time(),
}
if event == "call":
self.function_calls[f"{filename}:{func_name}"] += 1
self.profiler.enter_function(frame)
trace_entry["locals"] = {
k: repr(v)[:100] for k, v in frame.f_locals.items() if not k.startswith("__")
}
elif event == "return":
self.profiler.exit_function(frame)
trace_entry["return_value"] = repr(arg)[:100] if arg else None
elif event == "line":
self.line_coverage.add((filename, line_no))
line_code = linecache.getline(filename, line_no).strip()
trace_entry["code"] = line_code
for var, value in frame.f_locals.items():
if not var.startswith("__"):
self.variable_changes[var].append(
{"line": line_no, "value": repr(value)[:100], "timestamp": time.time()}
)
self.memory_tracker.track_object(value, f"{filename}:{line_no}")
elif event == "exception":
exc_type, exc_value, exc_tb = arg
self.exception_trace.append(
{
"type": exc_type.__name__,
"message": str(exc_value),
"filename": filename,
"function": func_name,
"lineno": line_no,
"timestamp": time.time(),
}
)
trace_entry["exception"] = {"type": exc_type.__name__, "message": str(exc_value)}
self.execution_trace.append(trace_entry)
return self.trace_calls
def start_tracing(self):
self.trace_active = True
sys.settrace(self.trace_calls)
threading.settrace(self.trace_calls)
def stop_tracing(self):
self.trace_active = False
sys.settrace(None)
threading.settrace(None)
def get_trace_report(self):
return {
"execution_trace": self.execution_trace[-100:],
"exception_trace": self.exception_trace,
"line_coverage": list(self.line_coverage),
"function_calls": dict(self.function_calls),
"variable_changes": {k: v[-10:] for k, v in self.variable_changes.items()},
"hotspots": self.profiler.get_hotspots(20),
"memory_report": self.memory_tracker.get_report(),
}
class GitBisectAutomator:
def __init__(self, repo_path="."):
self.repo_path = repo_path
def is_git_repo(self):
try:
result = subprocess.run(
["git", "rev-parse", "--git-dir"],
cwd=self.repo_path,
capture_output=True,
text=True,
)
return result.returncode == 0
except:
return False
def get_commit_history(self, limit=50):
try:
result = subprocess.run(
["git", "log", f"--max-count={limit}", "--oneline"],
cwd=self.repo_path,
capture_output=True,
text=True,
)
if result.returncode == 0:
commits = []
for line in result.stdout.strip().split("\n"):
parts = line.split(" ", 1)
if len(parts) == 2:
commits.append({"hash": parts[0], "message": parts[1]})
return commits
except:
pass
return []
def blame_file(self, filepath):
try:
result = subprocess.run(
["git", "blame", "-L", "1,50", filepath],
cwd=self.repo_path,
capture_output=True,
text=True,
)
if result.returncode == 0:
return result.stdout
except:
pass
return None
class LogAnalyzer:
def __init__(self):
self.log_patterns = {
"error": re.compile("error|exception|fail|critical", re.IGNORECASE),
"warning": re.compile("warn|caution", re.IGNORECASE),
"debug": re.compile("debug|trace", re.IGNORECASE),
"timestamp": re.compile(r"\d{4}-\d{2}-\d{2}[\s_T]\d{2}:\d{2}:\d{2}"),
}
self.anomalies = []
def analyze_logs(self, log_content):
lines = log_content.split("\n")
errors = []
warnings = []
timestamps = []
for i, line in enumerate(lines):
if self.log_patterns["error"].search(line):
errors.append({"line": i + 1, "content": line})
elif self.log_patterns["warning"].search(line):
warnings.append({"line": i + 1, "content": line})
ts_match = self.log_patterns["timestamp"].search(line)
if ts_match:
timestamps.append(ts_match.group())
return {
"total_lines": len(lines),
"errors": errors[:50],
"warnings": warnings[:50],
"error_count": len(errors),
"warning_count": len(warnings),
"timestamps": timestamps[:20],
}
class ExceptionAnalyzer:
def __init__(self):
self.exceptions = []
self.exception_counts = defaultdict(int)
def capture_exception(self, exc_type, exc_value, exc_traceback):
exc_info = {
"type": exc_type.__name__,
"message": str(exc_value),
"timestamp": time.time(),
"traceback": [],
}
tb = exc_traceback
while tb is not None:
frame = tb.tb_frame
exc_info["traceback"].append(
{
"filename": frame.f_code.co_filename,
"function": frame.f_code.co_name,
"lineno": tb.tb_lineno,
"locals": {
k: repr(v)[:100]
for k, v in frame.f_locals.items()
if not k.startswith("__")
},
}
)
tb = tb.tb_next
self.exceptions.append(exc_info)
self.exception_counts[exc_type.__name__] += 1
return exc_info
def get_report(self):
return {
"total_exceptions": len(self.exceptions),
"exception_types": dict(self.exception_counts),
"recent_exceptions": self.exceptions[-10:],
}
class TestGenerator:
def __init__(self):
self.test_cases = []
def generate_tests_for_function(self, func_name, func_signature):
test_template = f"def test_{func_name}_basic():\n result = {func_name}()\n assert result is not None\n\ndef test_{func_name}_edge_cases():\n pass\n\ndef test_{func_name}_exceptions():\n pass\n"
return test_template
def analyze_function_for_tests(self, func_obj):
sig = inspect.signature(func_obj)
test_inputs = []
for param_name, param in sig.parameters.items():
if param.annotation != inspect.Parameter.empty:
param_type = param.annotation
if param_type == int:
test_inputs.append([0, 1, -1, 100])
elif param_type == str:
test_inputs.append(["", "test", "a" * 100])
elif param_type == list:
test_inputs.append([[], [1], [1, 2, 3]])
else:
test_inputs.append([None])
else:
test_inputs.append([None, 0, "", []])
return test_inputs
class CodeFlowVisualizer:
def __init__(self):
self.flow_graph = defaultdict(list)
self.call_hierarchy = defaultdict(set)
def build_flow_from_trace(self, execution_trace):
for i in range(len(execution_trace) - 1):
current = execution_trace[i]
next_step = execution_trace[i + 1]
current_node = f"{current['function']}:{current['lineno']}"
next_node = f"{next_step['function']}:{next_step['lineno']}"
self.flow_graph[current_node].append(next_node)
if current["event"] == "call":
caller = current["function"]
callee = next_step["function"]
self.call_hierarchy[caller].add(callee)
def generate_text_visualization(self):
output = []
output.append("Call Hierarchy:")
for caller, callees in sorted(self.call_hierarchy.items()):
output.append(f" {caller}")
for callee in sorted(callees):
output.append(f" -> {callee}")
return "\n".join(output)
class AutomatedDebugger:
def __init__(self):
self.static_analyzer = StaticAnalyzer()
self.dynamic_tracer = DynamicTracer()
self.exception_analyzer = ExceptionAnalyzer()
self.log_analyzer = LogAnalyzer()
self.git_automator = GitBisectAutomator()
self.test_generator = TestGenerator()
self.flow_visualizer = CodeFlowVisualizer()
self.original_excepthook = sys.excepthook
def analyze_source_file(self, filepath):
try:
with open(filepath, "r", encoding="utf-8") as f:
source_code = f.read()
return self.static_analyzer.analyze_code(source_code)
except Exception as e:
return {"error": str(e)}
def run_with_tracing(self, target_func, *args, **kwargs):
self.dynamic_tracer.start_tracing()
result = None
exception = None
try:
result = target_func(*args, **kwargs)
except Exception as e:
exception = self.exception_analyzer.capture_exception(type(e), e, e.__traceback__)
finally:
self.dynamic_tracer.stop_tracing()
self.flow_visualizer.build_flow_from_trace(self.dynamic_tracer.execution_trace)
return {
"result": result,
"exception": exception,
"trace": self.dynamic_tracer.get_trace_report(),
"flow": self.flow_visualizer.generate_text_visualization(),
}
def analyze_logs(self, log_file_or_content):
if os.path.isfile(log_file_or_content):
with open(log_file_or_content, "r", encoding="utf-8") as f:
content = f.read()
else:
content = log_file_or_content
return self.log_analyzer.analyze_logs(content)
def generate_debug_report(self, output_file="debug_report.json"):
report = {
"timestamp": datetime.now().isoformat(),
"static_analysis": self.static_analyzer.issues,
"dynamic_trace": self.dynamic_tracer.get_trace_report(),
"exceptions": self.exception_analyzer.get_report(),
"git_info": (
self.git_automator.get_commit_history(10)
if self.git_automator.is_git_repo()
else None
),
"flow_visualization": self.flow_visualizer.generate_text_visualization(),
}
with open(output_file, "w") as f:
json.dump(report, f, indent=2, default=str)
return report
def auto_debug_function(self, func, test_inputs=None):
results = []
if test_inputs is None:
test_inputs = self.test_generator.analyze_function_for_tests(func)
for input_set in test_inputs:
try:
if isinstance(input_set, list):
result = self.run_with_tracing(func, *input_set)
else:
result = self.run_with_tracing(func, input_set)
results.append(
{
"input": input_set,
"success": result["exception"] is None,
"output": result["result"],
"trace_summary": {
"function_calls": len(result["trace"]["function_calls"]),
"exceptions": len(result["trace"]["exception_trace"]),
},
}
)
except Exception as e:
results.append({"input": input_set, "success": False, "error": str(e)})
return results
_memory_tracker = MemoryTracker()
_performance_profiler = PerformanceProfiler()
_static_analyzer = StaticAnalyzer()
_dynamic_tracer = DynamicTracer()
_git_automator = GitBisectAutomator()
_log_analyzer = LogAnalyzer()
_exception_analyzer = ExceptionAnalyzer()
_test_generator = TestGenerator()
_code_flow_visualizer = CodeFlowVisualizer()
_automated_debugger = AutomatedDebugger()
def track_memory_allocation(location: str = "manual") -> dict:
"""Track current memory allocation at a specific location."""
try:
_memory_tracker.track_object({}, location)
return {
"status": "success",
"current_memory": _memory_tracker.current_memory,
"peak_memory": _memory_tracker.peak_memory,
"location": location,
}
except Exception as e:
return {"status": "error", "error": str(e)}
def analyze_memory_leaks() -> dict:
"""Analyze potential memory leaks in the current process."""
try:
leaks = _memory_tracker.analyze_leaks()
return {"status": "success", "leaks_found": len(leaks), "top_leaks": leaks[:10]}
except Exception as e:
return {"status": "error", "error": str(e)}
def get_memory_report() -> dict:
"""Get a comprehensive memory usage report."""
try:
return {"status": "success", "report": _memory_tracker.get_report()}
except Exception as e:
return {"status": "error", "error": str(e)}
def start_performance_profiling() -> dict:
"""Start performance profiling."""
try:
global _performance_profiler
_performance_profiler = PerformanceProfiler()
return {"status": "success", "message": "Performance profiling started"}
except Exception as e:
return {"status": "error", "error": str(e)}
def stop_performance_profiling() -> dict:
"""Stop performance profiling and get hotspots."""
try:
hotspots = _performance_profiler.get_hotspots(20)
return {"status": "success", "hotspots": hotspots}
except Exception as e:
return {"status": "error", "error": str(e)}
def analyze_source_code(source_code: str) -> dict:
"""Perform static analysis on Python source code."""
try:
analyzer = StaticAnalyzer()
result = analyzer.analyze_code(source_code)
return {"status": "success", "analysis": result}
except Exception as e:
return {"status": "error", "error": str(e)}
def analyze_source_file(filepath: str) -> dict:
"""Analyze a Python source file statically."""
try:
result = _automated_debugger.analyze_source_file(filepath)
return {"status": "success", "filepath": filepath, "analysis": result}
except Exception as e:
return {"status": "error", "error": str(e)}
def start_dynamic_tracing() -> dict:
"""Start dynamic execution tracing."""
try:
_dynamic_tracer.start_tracing()
return {"status": "success", "message": "Dynamic tracing started"}
except Exception as e:
return {"status": "error", "error": str(e)}
def stop_dynamic_tracing() -> dict:
"""Stop dynamic tracing and get trace report."""
try:
_dynamic_tracer.stop_tracing()
report = _dynamic_tracer.get_trace_report()
return {"status": "success", "trace_report": report}
except Exception as e:
return {"status": "error", "error": str(e)}
def get_git_commit_history(limit: int = 50) -> dict:
"""Get recent git commit history."""
try:
commits = _git_automator.get_commit_history(limit)
return {
"status": "success",
"commits": commits,
"is_git_repo": _git_automator.is_git_repo(),
}
except Exception as e:
return {"status": "error", "error": str(e)}
def blame_file(filepath: str) -> dict:
"""Get git blame information for a file."""
try:
blame_output = _git_automator.blame_file(filepath)
return {"status": "success", "filepath": filepath, "blame": blame_output}
except Exception as e:
return {"status": "error", "error": str(e)}
def analyze_log_content(log_content: str) -> dict:
"""Analyze log content for errors, warnings, and patterns."""
try:
analysis = _log_analyzer.analyze_logs(log_content)
return {"status": "success", "analysis": analysis}
except Exception as e:
return {"status": "error", "error": str(e)}
def analyze_log_file(filepath: str) -> dict:
"""Analyze a log file."""
try:
analysis = _automated_debugger.analyze_logs(filepath)
return {"status": "success", "filepath": filepath, "analysis": analysis}
except Exception as e:
return {"status": "error", "error": str(e)}
def get_exception_report() -> dict:
"""Get a report of captured exceptions."""
try:
report = _exception_analyzer.get_report()
return {"status": "success", "exception_report": report}
except Exception as e:
return {"status": "error", "error": str(e)}
def generate_tests_for_function(func_name: str, func_signature: str = "") -> dict:
"""Generate test templates for a function."""
try:
test_code = _test_generator.generate_tests_for_function(func_name, func_signature)
return {"status": "success", "func_name": func_name, "test_code": test_code}
except Exception as e:
return {"status": "error", "error": str(e)}
def visualize_code_flow_from_trace(execution_trace) -> dict:
"""Visualize code flow from execution trace."""
try:
visualizer = CodeFlowVisualizer()
visualizer.build_flow_from_trace(execution_trace)
visualization = visualizer.generate_text_visualization()
return {"status": "success", "flow_visualization": visualization}
except Exception as e:
return {"status": "error", "error": str(e)}
def run_function_with_debugging(func_code: str, *args, **kwargs) -> dict:
"""Execute a function with full debugging."""
try:
local_vars = {}
exec(func_code, globals(), local_vars)
func = None
for name, obj in local_vars.items():
if callable(obj) and (not name.startswith("_")):
func = obj
break
if func is None:
return {"status": "error", "error": "No function found in code"}
result = _automated_debugger.run_with_tracing(func, *args, **kwargs)
return {"status": "success", "debug_result": result}
except Exception as e:
return {"status": "error", "error": str(e)}
def generate_comprehensive_debug_report(output_file: str = "debug_report.json") -> dict:
"""Generate a comprehensive debug report."""
try:
report = _automated_debugger.generate_debug_report(output_file)
return {
"status": "success",
"output_file": output_file,
"report_summary": {
"static_issues": len(report.get("static_analysis", [])),
"exceptions": report.get("exceptions", {}).get("total_exceptions", 0),
"function_calls": len(report.get("dynamic_trace", {}).get("function_calls", {})),
},
}
except Exception as e:
return {"status": "error", "error": str(e)}
def auto_debug_function(func_code: str, test_inputs: list = None) -> dict:
"""Automatically debug a function with test inputs."""
try:
local_vars = {}
exec(func_code, globals(), local_vars)
func = None
for name, obj in local_vars.items():
if callable(obj) and (not name.startswith("_")):
func = obj
break
if func is None:
return {"status": "error", "error": "No function found in code"}
if test_inputs is None:
test_inputs = _test_generator.analyze_function_for_tests(func)
results = _automated_debugger.auto_debug_function(func, test_inputs)
return {"status": "success", "debug_results": results}
except Exception as e:
return {"status": "error", "error": str(e)}
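Everything in this module is surfaced through small wrappers that return {"status": ...} dicts, so the pieces compose without sharing state beyond the module-level singletons. A hedged sketch of two entry points (report keys follow the code above; output is abbreviated):

    from rp.tools.debugging import analyze_source_code, run_function_with_debugging

    analysis = analyze_source_code(open("rp/tools/base.py").read())
    print(analysis["analysis"]["complexity"], analysis["analysis"]["issues"][:3])

    debugged = run_function_with_debugging("def greet(name='world'):\n    return f'hello {name}'\n")
    print(debugged["debug_result"]["flow"])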

rp/tools/editor.py
@@ -1,9 +1,6 @@
 import os
 import os.path

-from pr.editor import RPEditor
-from pr.multiplexer import close_multiplexer, create_multiplexer, get_multiplexer
+from rp.editor import RPEditor

 from ..tools.patch import display_content_diff
 from ..ui.edit_feedback import track_edit, tracker
@@ -17,74 +14,66 @@ def get_editor(filepath):
 def close_editor(filepath):
+    from rp.multiplexer import close_multiplexer, get_multiplexer
+
     try:
         path = os.path.expanduser(filepath)
         editor = get_editor(path)
         editor.close()
         mux_name = f"editor-{path}"
         mux = get_multiplexer(mux_name)
         if mux:
             mux.write_stdout(f"Closed editor for: {path}\n")
             close_multiplexer(mux_name)
         return {"status": "success", "message": f"Editor closed for {path}"}
     except Exception as e:
         return {"status": "error", "error": str(e)}


 def open_editor(filepath):
+    from rp.multiplexer import create_multiplexer
+
     try:
         path = os.path.expanduser(filepath)
         editor = RPEditor(path)
         editor.start()
         mux_name = f"editor-{path}"
         mux_name, mux = create_multiplexer(mux_name, show_output=True)
         mux.write_stdout(f"Opened editor for: {path}\n")
-        return {
-            "status": "success",
-            "message": f"Editor opened for {path}",
-            "mux_name": mux_name,
-        }
+        return {"status": "success", "message": f"Editor opened for {path}", "mux_name": mux_name}
     except Exception as e:
         return {"status": "error", "error": str(e)}


 def editor_insert_text(filepath, text, line=None, col=None, show_diff=True):
+    from rp.multiplexer import get_multiplexer
+
     try:
         path = os.path.expanduser(filepath)
         old_content = ""
         if os.path.exists(path):
             with open(path) as f:
                 old_content = f.read()
         position = (line if line is not None else 0) * 1000 + (col if col is not None else 0)
         operation = track_edit("INSERT", filepath, start_pos=position, content=text)
         tracker.mark_in_progress(operation)
         editor = get_editor(path)
         if line is not None and col is not None:
             editor.move_cursor_to(line, col)
         editor.insert_text(text)
         editor.save_file()
         mux_name = f"editor-{path}"
         mux = get_multiplexer(mux_name)
         if mux:
             location = f" at line {line}, col {col}" if line is not None and col is not None else ""
             preview = text[:50] + "..." if len(text) > 50 else text
             mux.write_stdout(f"Inserted text{location}: {repr(preview)}\n")
             if show_diff and old_content:
                 with open(path) as f:
                     new_content = f.read()
                 diff_result = display_content_diff(old_content, new_content, filepath)
                 if diff_result["status"] == "success":
                     mux.write_stdout(diff_result["visual_diff"] + "\n")
         tracker.mark_completed(operation)
         result = {"status": "success", "message": f"Inserted text in {path}"}
         close_editor(filepath)
@@ -98,14 +87,14 @@ def editor_insert_text(filepath, text, line=None, col=None, show_diff=True):
 def editor_replace_text(
     filepath, start_line, start_col, end_line, end_col, new_text, show_diff=True
 ):
+    from rp.multiplexer import get_multiplexer
+
     try:
         path = os.path.expanduser(filepath)
         old_content = ""
         if os.path.exists(path):
             with open(path) as f:
                 old_content = f.read()
         start_pos = start_line * 1000 + start_col
         end_pos = end_line * 1000 + end_col
         operation = track_edit(
@@ -117,11 +106,9 @@ def editor_replace_text(
             old_content=old_content,
         )
         tracker.mark_in_progress(operation)
         editor = get_editor(path)
         editor.replace_text(start_line, start_col, end_line, end_col, new_text)
         editor.save_file()
         mux_name = f"editor-{path}"
         mux = get_multiplexer(mux_name)
         if mux:
@@ -129,14 +116,12 @@ def editor_replace_text(
             mux.write_stdout(
                 f"Replaced text from ({start_line},{start_col}) to ({end_line},{end_col}): {repr(preview)}\n"
             )
             if show_diff and old_content:
                 with open(path) as f:
                     new_content = f.read()
                 diff_result = display_content_diff(old_content, new_content, filepath)
                 if diff_result["status"] == "success":
                     mux.write_stdout(diff_result["visual_diff"] + "\n")
         tracker.mark_completed(operation)
         result = {"status": "success", "message": f"Replaced text in {path}"}
         close_editor(filepath)
@@ -148,18 +133,18 @@
 def editor_search(filepath, pattern, start_line=0):
+    from rp.multiplexer import get_multiplexer
+
     try:
         path = os.path.expanduser(filepath)
         editor = RPEditor(path)
         results = editor.search(pattern, start_line)
         mux_name = f"editor-{path}"
         mux = get_multiplexer(mux_name)
         if mux:
             mux.write_stdout(
                 f"Searched for pattern '{pattern}' from line {start_line}: {len(results)} matches\n"
             )
         result = {"status": "success", "results": results}
         close_editor(filepath)
         return result
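Each editor tool lazily looks up (or creates) a multiplexer channel named editor-<path> and logs its actions there, and editor_insert_text / editor_replace_text call close_editor themselves after saving. A short usage sketch:

    from rp.tools.editor import open_editor, editor_insert_text

    open_editor("~/notes.txt")
    editor_insert_text("~/notes.txt", "TODO: review diff\n", line=0, col=0)
    # no explicit close needed: editor_insert_text closes the editor after saving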

rp/tools/filesystem.py
@@ -1,9 +1,8 @@
 import hashlib
 import os
 import time
+from typing import Optional, Any

-from pr.editor import RPEditor
+from rp.editor import RPEditor

 from ..tools.patch import display_content_diff
 from ..ui.diff_display import get_diff_stats
 from ..ui.edit_feedback import track_edit, tracker
@@ -17,13 +16,23 @@ def get_uid():
     return _id


-def read_file(filepath, db_conn=None):
+def read_file(filepath: str, db_conn: Optional[Any] = None) -> dict:
+    """
+    Read the contents of a file.
+
+    Args:
+        filepath: Path to the file to read
+        db_conn: Optional database connection for tracking
+
+    Returns:
+        dict: Status and content or error
+    """
     try:
         path = os.path.expanduser(filepath)
         with open(path) as f:
             content = f.read()
         if db_conn:
-            from pr.tools.database import db_set
+            from rp.tools.database import db_set

             db_set("read:" + path, "true", db_conn)
         return {"status": "success", "content": content}
@@ -31,14 +40,28 @@ def read_file(filepath: str, db_conn: Optional[Any] = None) -> dict:
         return {"status": "error", "error": str(e)}


-def write_file(filepath, content, db_conn=None, show_diff=True):
+def write_file(
+    filepath: str, content: str, db_conn: Optional[Any] = None, show_diff: bool = True
+) -> dict:
+    """
+    Write content to a file.
+
+    Args:
+        filepath: Path to the file to write
+        content: Content to write
+        db_conn: Optional database connection for tracking
+        show_diff: Whether to show diff of changes
+
+    Returns:
+        dict: Status and message or error
+    """
+    operation = None
     try:
         path = os.path.expanduser(filepath)
         old_content = ""
         is_new_file = not os.path.exists(path)
         if not is_new_file and db_conn:
-            from pr.tools.database import db_get
+            from rp.tools.database import db_get

             read_status = db_get("read:" + path, db_conn)
             if read_status.get("status") != "success" or read_status.get("value") != "true":
@@ -46,59 +69,48 @@ def write_file(
                 return {
                     "status": "error",
                     "error": "File must be read before writing. Please read the file first.",
                 }
         if not is_new_file:
             with open(path) as f:
                 old_content = f.read()
         operation = track_edit("WRITE", filepath, content=content, old_content=old_content)
         tracker.mark_in_progress(operation)
-        if show_diff and not is_new_file:
+        if show_diff and (not is_new_file):
             diff_result = display_content_diff(old_content, content, filepath)
             if diff_result["status"] == "success":
                 print(diff_result["visual_diff"])
         editor = RPEditor(path)
         editor.set_text(content)
         editor.save_file()
         if os.path.exists(path) and db_conn:
             try:
                 cursor = db_conn.cursor()
                 file_hash = hashlib.md5(old_content.encode()).hexdigest()
                 cursor.execute(
-                    "SELECT MAX(version) FROM file_versions WHERE filepath = ?",
+                    "SELECT MAX(version) FROM file_versions WHERE filepath = ?", (filepath,)
(filepath,),
) )
result = cursor.fetchone() result = cursor.fetchone()
version = (result[0] + 1) if result[0] else 1 version = result[0] + 1 if result[0] else 1
cursor.execute( cursor.execute(
"""INSERT INTO file_versions (filepath, content, hash, timestamp, version) "INSERT INTO file_versions (filepath, content, hash, timestamp, version)\n VALUES (?, ?, ?, ?, ?)",
VALUES (?, ?, ?, ?, ?)""",
(filepath, old_content, file_hash, time.time(), version), (filepath, old_content, file_hash, time.time(), version),
) )
db_conn.commit() db_conn.commit()
except Exception: except Exception:
pass pass
tracker.mark_completed(operation) tracker.mark_completed(operation)
message = f"File written to {path}" message = f"File written to {path}"
if show_diff and not is_new_file: if show_diff and (not is_new_file):
stats = get_diff_stats(old_content, content) stats = get_diff_stats(old_content, content)
message += f" ({stats['insertions']}+ {stats['deletions']}-)" message += f" ({stats['insertions']}+ {stats['deletions']}-)"
return {"status": "success", "message": message} return {"status": "success", "message": message}
except Exception as e: except Exception as e:
if "operation" in locals(): if operation is not None:
tracker.mark_failed(operation) tracker.mark_failed(operation)
return {"status": "error", "error": str(e)} return {"status": "error", "error": str(e)}
def list_directory(path=".", recursive=False): def list_directory(path=".", recursive=False):
"""List files and directories in the specified path."""
try: try:
path = os.path.expanduser(path) path = os.path.expanduser(path)
items = [] items = []
@ -107,11 +119,7 @@ def list_directory(path=".", recursive=False):
for name in files: for name in files:
item_path = os.path.join(root, name) item_path = os.path.join(root, name)
items.append( items.append(
{ {"path": item_path, "type": "file", "size": os.path.getsize(item_path)}
"path": item_path,
"type": "file",
"size": os.path.getsize(item_path),
}
) )
for name in dirs: for name in dirs:
items.append({"path": os.path.join(root, name), "type": "directory"}) items.append({"path": os.path.join(root, name), "type": "directory"})
@ -122,7 +130,7 @@ def list_directory(path=".", recursive=False):
{ {
"name": item, "name": item,
"type": "directory" if os.path.isdir(item_path) else "file", "type": "directory" if os.path.isdir(item_path) else "file",
"size": (os.path.getsize(item_path) if os.path.isfile(item_path) else None), "size": os.path.getsize(item_path) if os.path.isfile(item_path) else None,
} }
) )
return {"status": "success", "items": items} return {"status": "success", "items": items}
@ -139,6 +147,7 @@ def mkdir(path):
def chdir(path): def chdir(path):
"""Change the current working directory."""
try: try:
os.chdir(os.path.expanduser(path)) os.chdir(os.path.expanduser(path))
return {"status": "success", "new_path": os.getcwd()} return {"status": "success", "new_path": os.getcwd()}
@ -147,13 +156,23 @@ def chdir(path):
def getpwd(): def getpwd():
"""Get the current working directory."""
try: try:
return {"status": "success", "path": os.getcwd()} return {"status": "success", "path": os.getcwd()}
except Exception as e: except Exception as e:
return {"status": "error", "error": str(e)} return {"status": "error", "error": str(e)}
def index_source_directory(path): def index_source_directory(path: str) -> dict:
"""
Index directory recursively and read all source files.
Args:
path: Path to index
Returns:
dict: Status and indexed files or error
"""
extensions = [ extensions = [
".py", ".py",
".js", ".js",
@ -176,7 +195,7 @@ def index_source_directory(path):
try: try:
for root, _, files in os.walk(os.path.expanduser(path)): for root, _, files in os.walk(os.path.expanduser(path)):
for file in files: for file in files:
if any(file.endswith(ext) for ext in extensions): if any((file.endswith(ext) for ext in extensions)):
filepath = os.path.join(root, file) filepath = os.path.join(root, file)
try: try:
with open(filepath, encoding="utf-8") as f: with open(filepath, encoding="utf-8") as f:
@ -189,13 +208,27 @@ def index_source_directory(path):
return {"status": "error", "error": str(e)} return {"status": "error", "error": str(e)}
def search_replace(filepath, old_string, new_string, db_conn=None): def search_replace(
filepath: str, old_string: str, new_string: str, db_conn: Optional[Any] = None
) -> dict:
"""
Search and replace text in a file.
Args:
filepath: Path to the file
old_string: String to replace
new_string: Replacement string
db_conn: Optional database connection for tracking
Returns:
dict: Status and message or error
"""
try: try:
path = os.path.expanduser(filepath) path = os.path.expanduser(filepath)
if not os.path.exists(path): if not os.path.exists(path):
return {"status": "error", "error": "File does not exist"} return {"status": "error", "error": "File does not exist"}
if db_conn: if db_conn:
from pr.tools.database import db_get from rp.tools.database import db_get
read_status = db_get("read:" + path, db_conn) read_status = db_get("read:" + path, db_conn)
if read_status.get("status") != "success" or read_status.get("value") != "true": if read_status.get("status") != "success" or read_status.get("value") != "true":
@ -246,10 +279,11 @@ def open_editor(filepath):
def editor_insert_text(filepath, text, line=None, col=None, show_diff=True, db_conn=None): def editor_insert_text(filepath, text, line=None, col=None, show_diff=True, db_conn=None):
operation = None
try: try:
path = os.path.expanduser(filepath) path = os.path.expanduser(filepath)
if db_conn: if db_conn:
from pr.tools.database import db_get from rp.tools.database import db_get
read_status = db_get("read:" + path, db_conn) read_status = db_get("read:" + path, db_conn)
if read_status.get("status") != "success" or read_status.get("value") != "true": if read_status.get("status") != "success" or read_status.get("value") != "true":
@ -257,51 +291,40 @@ def editor_insert_text(filepath, text, line=None, col=None, show_diff=True, db_c
"status": "error", "status": "error",
"error": "File must be read before writing. Please read the file first.", "error": "File must be read before writing. Please read the file first.",
} }
old_content = "" old_content = ""
if os.path.exists(path): if os.path.exists(path):
with open(path) as f: with open(path) as f:
old_content = f.read() old_content = f.read()
position = (line if line is not None else 0) * 1000 + (col if col is not None else 0) position = (line if line is not None else 0) * 1000 + (col if col is not None else 0)
operation = track_edit("INSERT", filepath, start_pos=position, content=text) operation = track_edit("INSERT", filepath, start_pos=position, content=text)
tracker.mark_in_progress(operation) tracker.mark_in_progress(operation)
editor = get_editor(path) editor = get_editor(path)
if line is not None and col is not None: if line is not None and col is not None:
editor.move_cursor_to(line, col) editor.move_cursor_to(line, col)
editor.insert_text(text) editor.insert_text(text)
editor.save_file() editor.save_file()
if show_diff and old_content: if show_diff and old_content:
with open(path) as f: with open(path) as f:
new_content = f.read() new_content = f.read()
diff_result = display_content_diff(old_content, new_content, filepath) diff_result = display_content_diff(old_content, new_content, filepath)
if diff_result["status"] == "success": if diff_result["status"] == "success":
print(diff_result["visual_diff"]) print(diff_result["visual_diff"])
tracker.mark_completed(operation) tracker.mark_completed(operation)
return {"status": "success", "message": f"Inserted text in {path}"} return {"status": "success", "message": f"Inserted text in {path}"}
except Exception as e: except Exception as e:
if "operation" in locals(): if operation is not None:
tracker.mark_failed(operation) tracker.mark_failed(operation)
return {"status": "error", "error": str(e)} return {"status": "error", "error": str(e)}
def editor_replace_text( def editor_replace_text(
filepath, filepath, start_line, start_col, end_line, end_col, new_text, show_diff=True, db_conn=None
start_line,
start_col,
end_line,
end_col,
new_text,
show_diff=True,
db_conn=None,
): ):
try: try:
operation = None
path = os.path.expanduser(filepath) path = os.path.expanduser(filepath)
if db_conn: if db_conn:
from pr.tools.database import db_get from rp.tools.database import db_get
read_status = db_get("read:" + path, db_conn) read_status = db_get("read:" + path, db_conn)
if read_status.get("status") != "success" or read_status.get("value") != "true": if read_status.get("status") != "success" or read_status.get("value") != "true":
@ -309,12 +332,10 @@ def editor_replace_text(
"status": "error", "status": "error",
"error": "File must be read before writing. Please read the file first.", "error": "File must be read before writing. Please read the file first.",
} }
old_content = "" old_content = ""
if os.path.exists(path): if os.path.exists(path):
with open(path) as f: with open(path) as f:
old_content = f.read() old_content = f.read()
start_pos = start_line * 1000 + start_col start_pos = start_line * 1000 + start_col
end_pos = end_line * 1000 + end_col end_pos = end_line * 1000 + end_col
operation = track_edit( operation = track_edit(
@ -326,22 +347,19 @@ def editor_replace_text(
old_content=old_content, old_content=old_content,
) )
tracker.mark_in_progress(operation) tracker.mark_in_progress(operation)
editor = get_editor(path) editor = get_editor(path)
editor.replace_text(start_line, start_col, end_line, end_col, new_text) editor.replace_text(start_line, start_col, end_line, end_col, new_text)
editor.save_file() editor.save_file()
if show_diff and old_content: if show_diff and old_content:
with open(path) as f: with open(path) as f:
new_content = f.read() new_content = f.read()
diff_result = display_content_diff(old_content, new_content, filepath) diff_result = display_content_diff(old_content, new_content, filepath)
if diff_result["status"] == "success": if diff_result["status"] == "success":
print(diff_result["visual_diff"]) print(diff_result["visual_diff"])
tracker.mark_completed(operation) tracker.mark_completed(operation)
return {"status": "success", "message": f"Replaced text in {path}"} return {"status": "success", "message": f"Replaced text in {path}"}
except Exception as e: except Exception as e:
if "operation" in locals(): if operation is not None:
tracker.mark_failed(operation) tracker.mark_failed(operation)
return {"status": "error", "error": str(e)} return {"status": "error", "error": str(e)}

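Reviewer note: write_file, search_replace, editor_insert_text, and editor_replace_text all enforce the same read-before-write guard: a "read:<path>" key must be "true" in the database before an existing file may be mutated (new files skip the check because is_new_file short-circuits it). A hedged usage sketch; the import path and the presence of the backing tables are assumptions, not shown in this diff:

import sqlite3
from rp.tools.files import read_file, write_file  # import path assumed

conn = sqlite3.connect("assistant.db")  # illustrative; backing tables assumed to exist
write_file("notes.txt", "v2", db_conn=conn)  # existing, never-read file -> {"status": "error", ...}
read_file("notes.txt", db_conn=conn)         # records "read:<path>" via db_set
write_file("notes.txt", "v2", db_conn=conn)  # now permitted; also versions the old content
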
View File

@@ -1,15 +1,20 @@
 import subprocess
 import threading
+import importlib
-from pr.multiplexer import (
-    close_multiplexer,
-    create_multiplexer,
-    get_all_multiplexer_states,
-    get_multiplexer,
-)


-def start_interactive_session(command, session_name=None, process_type="generic"):
+def _get_multiplexer_functions():
+    """Lazy import multiplexer functions to avoid circular imports."""
+    multiplexer = importlib.import_module("rp.multiplexer")
+    return {
+        "create_multiplexer": multiplexer.create_multiplexer,
+        "get_multiplexer": multiplexer.get_multiplexer,
+        "close_multiplexer": multiplexer.close_multiplexer,
+        "get_all_multiplexer_states": multiplexer.get_all_multiplexer_states,
+    }
+
+
+def start_interactive_session(command, session_name=None, process_type="generic", cwd=None):
     """
     Start an interactive session in a dedicated multiplexer.
@@ -17,17 +22,16 @@ def start_interactive_session(command, session_name=None, process_type="generic"
         command: The command to run (list or string)
         session_name: Optional name for the session
         process_type: Type of process (ssh, vim, apt, etc.)
+        cwd: Current working directory for the command

     Returns:
         session_name: The name of the created session
     """
-    name, mux = create_multiplexer(session_name)
+    funcs = _get_multiplexer_functions()
+    name, mux = funcs["create_multiplexer"](session_name)
     mux.update_metadata("process_type", process_type)
-    # Start the process
     if isinstance(command, str):
         command = command.split()
     try:
         process = subprocess.Popen(
             command,
@@ -36,43 +40,48 @@ def start_interactive_session(command, session_name=None, process_type="generic"
             stderr=subprocess.PIPE,
             text=True,
             bufsize=1,
+            cwd=cwd,
         )
         mux.process = process
         mux.update_metadata("pid", process.pid)
+        from rp.tools.process_handlers import detect_process_type

-        # Set process type and handler
         detected_type = detect_process_type(command)
         mux.set_process_type(detected_type)
-        # Start output readers
         stdout_thread = threading.Thread(
-            target=_read_output, args=(process.stdout, mux.write_stdout), daemon=True
+            target=_read_output, args=(process.stdout, mux.write_stdout, detected_type), daemon=True
         )
         stderr_thread = threading.Thread(
-            target=_read_output, args=(process.stderr, mux.write_stderr), daemon=True
+            target=_read_output, args=(process.stderr, mux.write_stderr, detected_type), daemon=True
         )
         stdout_thread.start()
         stderr_thread.start()
         mux.stdout_thread = stdout_thread
         mux.stderr_thread = stderr_thread
         return name
     except Exception as e:
-        close_multiplexer(name)
+        funcs["close_multiplexer"](name)
         raise e


-def _read_output(stream, write_func):
+def _read_output(stream, write_func, process_type):
     """Read from a stream and write to multiplexer buffer."""
-    try:
-        for line in iter(stream.readline, ""):
-            if line:
-                write_func(line.rstrip("\n"))
-    except Exception as e:
-        print(f"Error reading output: {e}")
+    if process_type in ["vim", "ssh"]:
+        try:
+            while True:
+                char = stream.read(1)
+                if not char:
+                    break
+                write_func(char)
+        except Exception as e:
+            print(f"Error reading output: {e}")
+    else:
+        try:
+            for line in iter(stream.readline, ""):
+                if line:
+                    write_func(line.rstrip("\n"))
+        except Exception as e:
+            print(f"Error reading output: {e}")


 def send_input_to_session(session_name, input_data):
@@ -83,13 +92,12 @@ def send_input_to_session(session_name, input_data):
         session_name: Name of the session
         input_data: Input string to send
     """
-    mux = get_multiplexer(session_name)
+    funcs = _get_multiplexer_functions()
+    mux = funcs["get_multiplexer"](session_name)
     if not mux:
         raise ValueError(f"Session {session_name} not found")
     if not hasattr(mux, "process") or mux.process.poll() is not None:
         raise ValueError(f"Session {session_name} is not active")
     try:
         mux.process.stdin.write(input_data + "\n")
         mux.process.stdin.flush()
@@ -98,23 +106,13 @@ def send_input_to_session(session_name, input_data):


 def read_session_output(session_name, lines=None):
-    """
-    Read output from a session.
-
-    Args:
-        session_name: Name of the session
-        lines: Number of recent lines to return (None for all)
-
-    Returns:
-        dict: {'stdout': str, 'stderr': str}
-    """
-    mux = get_multiplexer(session_name)
+    funcs = _get_multiplexer_functions()
+    "\n    Read output from a session.\n\n    Args:\n        session_name: Name of the session\n        lines: Number of recent lines to return (None for all)\n\n    Returns:\n        dict: {'stdout': str, 'stderr': str}\n    "
+    mux = funcs["get_multiplexer"](session_name)
     if not mux:
         raise ValueError(f"Session {session_name} not found")
     output = mux.get_all_output()
     if lines is not None:
-        # Return last N lines
         stdout_lines = output["stdout"].split("\n")[-lines:] if output["stdout"] else []
         stderr_lines = output["stderr"].split("\n")[-lines:] if output["stderr"] else []
         output = {"stdout": "\n".join(stdout_lines), "stderr": "\n".join(stderr_lines)}
@@ -128,7 +126,8 @@ def list_active_sessions():
     Returns:
         dict: Session states
     """
-    return get_all_multiplexer_states()
+    funcs = _get_multiplexer_functions()
+    return funcs["get_all_multiplexer_states"]()


 def get_session_status(session_name):
@@ -141,10 +140,10 @@ def get_session_status(session_name):
     Returns:
         dict: Session metadata and status
     """
-    mux = get_multiplexer(session_name)
+    funcs = _get_multiplexer_functions()
+    mux = funcs["get_multiplexer"](session_name)
     if not mux:
         return None
     status = mux.get_metadata()
     status["is_active"] = hasattr(mux, "process") and mux.process.poll() is None
     if status["is_active"]:
@@ -161,10 +160,11 @@ def close_interactive_session(session_name):
     Close an interactive session.
     """
     try:
-        mux = get_multiplexer(session_name)
+        funcs = _get_multiplexer_functions()
+        mux = funcs["get_multiplexer"](session_name)
         if mux:
             mux.process.kill()
-        close_multiplexer(session_name)
+        funcs["close_multiplexer"](session_name)
         return {"status": "success"}
     except Exception as e:
         return {"status": "error", "error": str(e)}

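Reviewer note: with the lazy _get_multiplexer_functions() indirection in place, the session API composes in the obvious order. A sketch of the intended flow; the module path is an assumption (the diff does not name this file), and the command is illustrative:

from rp.tools.interactive_sessions import (  # import path assumed
    start_interactive_session,
    send_input_to_session,
    read_session_output,
    close_interactive_session,
)

name = start_interactive_session("python3 -i", process_type="generic", cwd="/tmp")
send_input_to_session(name, "print(2 + 2)")
out = read_session_output(name, lines=5)  # {'stdout': ..., 'stderr': ...}
close_interactive_session(name)

Note also that vim/ssh sessions are now read one character at a time in _read_output, so their buffers preserve control sequences instead of being line-joined.
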
rp/tools/lsp.py Normal file (+26)
View File

@ -0,0 +1,26 @@
from typing import Dict, Any
def get_diagnostics(filepath: str) -> Dict[str, Any]:
"""
Get LSP diagnostics for a file.
Args:
filepath: The path to the file.
Returns:
A dictionary with the status and a list of diagnostics.
"""
# This is a placeholder implementation.
# A real implementation would require a running LSP server and a client.
return {
"status": "success",
"diagnostics": [
{
"line": 1,
"character": 0,
"message": "Placeholder diagnostic: This is not a real error.",
"severity": 1,
}
],
}

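Reviewer note: get_diagnostics is explicitly a stub, but callers can already code against its return shape:

from rp.tools.lsp import get_diagnostics

result = get_diagnostics("rp/tools/lsp.py")
for diag in result["diagnostics"]:
    # Each entry carries line, character, message, and an LSP-style severity.
    print(f"{diag['line']}:{diag['character']} severity={diag['severity']} {diag['message']}")
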
View File

@@ -2,8 +2,7 @@ import os
 import time
 import uuid
 from typing import Any, Dict
-from pr.memory.knowledge_store import KnowledgeEntry, KnowledgeStore
+from rp.memory.knowledge_store import KnowledgeEntry, KnowledgeStore


 def add_knowledge_entry(
@@ -13,10 +12,8 @@ def add_knowledge_entry(
     try:
         db_path = os.path.expanduser("~/.assistant_db.sqlite")
         store = KnowledgeStore(db_path)
         if entry_id is None:
             entry_id = str(uuid.uuid4())[:16]
         entry = KnowledgeEntry(
             entry_id=entry_id,
             category=category,
@@ -25,7 +22,6 @@ def add_knowledge_entry(
             created_at=time.time(),
             updated_at=time.time(),
         )
         store.add_entry(entry)
         return {"status": "success", "entry_id": entry_id}
     except Exception as e:
@@ -37,7 +33,6 @@ def get_knowledge_entry(entry_id: str) -> Dict[str, Any]:
     try:
         db_path = os.path.expanduser("~/.assistant_db.sqlite")
         store = KnowledgeStore(db_path)
         entry = store.get_entry(entry_id)
         if entry:
             return {"status": "success", "entry": entry.to_dict()}
@@ -52,7 +47,6 @@ def search_knowledge(query: str, category: str = None, top_k: int = 5) -> Dict[s
     try:
         db_path = os.path.expanduser("~/.assistant_db.sqlite")
         store = KnowledgeStore(db_path)
         entries = store.search_entries(query, category, top_k)
         results = [entry.to_dict() for entry in entries]
         return {"status": "success", "results": results}
@@ -65,7 +59,6 @@ def get_knowledge_by_category(category: str, limit: int = 20) -> Dict[str, Any]:
     try:
         db_path = os.path.expanduser("~/.assistant_db.sqlite")
         store = KnowledgeStore(db_path)
         entries = store.get_by_category(category, limit)
         results = [entry.to_dict() for entry in entries]
         return {"status": "success", "entries": results}
@@ -78,13 +71,8 @@ def update_knowledge_importance(entry_id: str, importance_score: float) -> Dict[
     try:
         db_path = os.path.expanduser("~/.assistant_db.sqlite")
         store = KnowledgeStore(db_path)
         store.update_importance(entry_id, importance_score)
-        return {
-            "status": "success",
-            "entry_id": entry_id,
-            "importance_score": importance_score,
-        }
+        return {"status": "success", "entry_id": entry_id, "importance_score": importance_score}
     except Exception as e:
         return {"status": "error", "error": str(e)}
@@ -94,7 +82,6 @@ def delete_knowledge_entry(entry_id: str) -> Dict[str, Any]:
     try:
         db_path = os.path.expanduser("~/.assistant_db.sqlite")
         store = KnowledgeStore(db_path)
         success = store.delete_entry(entry_id)
         return {"status": "success" if success else "not_found", "entry_id": entry_id}
     except Exception as e:
@@ -106,7 +93,6 @@ def get_knowledge_statistics() -> Dict[str, Any]:
     try:
         db_path = os.path.expanduser("~/.assistant_db.sqlite")
         store = KnowledgeStore(db_path)
         stats = store.get_statistics()
         return {"status": "success", "statistics": stats}
     except Exception as e:

View File

@@ -2,15 +2,24 @@ import difflib
 import os
 import subprocess
 import tempfile
 from ..ui.diff_display import display_diff, get_diff_stats


 def apply_patch(filepath, patch_content, db_conn=None):
+    """Apply a patch to a file.
+
+    Args:
+        filepath: Path to the file to patch.
+        patch_content: The patch content as a string.
+        db_conn: Database connection (optional).
+
+    Returns:
+        Dict with status and output.
+    """
     try:
         path = os.path.expanduser(filepath)
         if db_conn:
-            from pr.tools.database import db_get
+            from rp.tools.database import db_get

             read_status = db_get("read:" + path, db_conn)
             if read_status.get("status") != "success" or read_status.get("value") != "true":
@@ -18,16 +27,11 @@ def apply_patch(filepath, patch_content, db_conn=None):
                 "status": "error",
                 "error": "File must be read before writing. Please read the file first.",
             }
-        # Write patch to temp file
         with tempfile.NamedTemporaryFile(mode="w", delete=False, suffix=".patch") as f:
             f.write(patch_content)
             patch_file = f.name
-        # Run patch command
         result = subprocess.run(
-            ["patch", path, patch_file],
-            capture_output=True,
-            text=True,
-            cwd=os.path.dirname(path),
+            ["patch", path, patch_file], capture_output=True, text=True, cwd=os.path.dirname(path)
         )
         os.unlink(patch_file)
         if result.returncode == 0:
@@ -41,13 +45,25 @@ def apply_patch(filepath, patch_content, db_conn=None):


 def create_diff(
     file1, file2, fromfile="file1", tofile="file2", visual=False, format_type="unified"
 ):
+    """Create a unified diff between two files.
+
+    Args:
+        file1: Path to the first file.
+        file2: Path to the second file.
+        fromfile: Label for the first file.
+        tofile: Label for the second file.
+        visual: Whether to include visual diff.
+        format_type: Diff format type.
+
+    Returns:
+        Dict with status and diff content.
+    """
     try:
         path1 = os.path.expanduser(file1)
         path2 = os.path.expanduser(file2)
         with open(path1) as f1, open(path2) as f2:
             content1 = f1.read()
             content2 = f2.read()
         if visual:
             visual_diff = display_diff(content1, content2, fromfile, format_type)
             stats = get_diff_stats(content1, content2)
@@ -75,15 +91,12 @@ def display_file_diff(filepath1, filepath2, format_type="unified", context_lines
     try:
         path1 = os.path.expanduser(filepath1)
         path2 = os.path.expanduser(filepath2)
         with open(path1) as f1:
             old_content = f1.read()
         with open(path2) as f2:
             new_content = f2.read()
         visual_diff = display_diff(old_content, new_content, filepath1, format_type)
         stats = get_diff_stats(old_content, new_content)
         return {"status": "success", "visual_diff": visual_diff, "stats": stats}
     except Exception as e:
         return {"status": "error", "error": str(e)}
@@ -93,7 +106,6 @@ def display_content_diff(old_content, new_content, filename="file", format_type=
     try:
         visual_diff = display_diff(old_content, new_content, filename, format_type)
         stats = get_diff_stats(old_content, new_content)
         return {"status": "success", "visual_diff": visual_diff, "stats": stats}
     except Exception as e:
         return {"status": "error", "error": str(e)}

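Reviewer note: create_diff and apply_patch pair into a simple round trip, though apply_patch still shells out to the external patch binary. A sketch only; the key under which create_diff returns the non-visual diff is elided by the hunk above, so "diff" is an assumption, and the filenames are illustrative:

from rp.tools.patch import create_diff, apply_patch

result = create_diff("config.old", "config.new", fromfile="config.old", tofile="config.new")
if result["status"] == "success":
    # "diff" key assumed; read-before-write applies when db_conn is passed.
    apply_patch("config.old", result["diff"], db_conn=None)
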
View File

@@ -43,12 +43,12 @@ class AptHandler(ProcessHandler):
             "cancelled": [],
         }
         self.prompt_patterns = [
-            (r"Do you want to continue\?", "confirmation"),
-            (r"After this operation.*installed\.", "size_info"),
-            (r"Need to get.*B of archives\.", "download_info"),
-            (r"Unpacking.*Configuring", "configuring"),
-            (r"Setting up", "setting_up"),
-            (r"E:\s", "error"),
+            ("Do you want to continue\\?", "confirmation"),
+            ("After this operation.*installed\\.", "size_info"),
+            ("Need to get.*B of archives\\.", "download_info"),
+            ("Unpacking.*Configuring", "configuring"),
+            ("Setting up", "setting_up"),
+            ("E:\\s", "error"),
         ]

     def get_process_type(self):
@@ -57,17 +57,12 @@ class AptHandler(ProcessHandler):
     def update_state(self, output):
         """Update state based on apt output patterns."""
         output_lower = output.lower()
-        # Check for completion
         if "processing triggers" in output_lower or "done" in output_lower:
             self.current_state = "completed"
-        # Check for confirmation prompts
         elif "do you want to continue" in output_lower:
             self.current_state = "waiting_confirmation"
-        # Check for installation progress
         elif "setting up" in output_lower or "unpacking" in output_lower:
             self.current_state = "installing"
-        # Check for errors
         elif "e:" in output_lower or "error" in output_lower:
             self.current_state = "error"
@@ -93,17 +88,13 @@ class VimHandler(ProcessHandler):
             "exiting": [],
         }
         self.prompt_patterns = [
-            (r"-- INSERT --", "insert_mode"),
-            (r"-- VISUAL --", "visual_mode"),
-            (r":", "command_mode"),
-            (r"Press ENTER", "waiting_enter"),
-            (r"Saved", "saved"),
+            ("-- INSERT --", "insert_mode"),
+            ("-- VISUAL --", "visual_mode"),
+            (":", "command_mode"),
+            ("Press ENTER", "waiting_enter"),
+            ("Saved", "saved"),
         ]
-        self.mode_indicators = {
-            "insert": "-- INSERT --",
-            "visual": "-- VISUAL --",
-            "command": ":",
-        }
+        self.mode_indicators = {"insert": "-- INSERT --", "visual": "-- VISUAL --", "command": ":"}

     def get_process_type(self):
         return "vim"
@@ -119,7 +110,6 @@ class VimHandler(ProcessHandler):
         elif "Press ENTER" in output:
             self.current_state = "waiting_enter"
         else:
-            # Default to normal mode if no specific indicators
             self.current_state = "normal_mode"

     def get_prompt_suggestions(self):
@@ -149,13 +139,13 @@ class SSHHandler(ProcessHandler):
             "disconnected": [],
         }
         self.prompt_patterns = [
-            (r"password:", "password_prompt"),
-            (r"yes/no", "host_key_prompt"),
-            (r"Permission denied", "auth_failed"),
-            (r"Welcome to", "connected"),
-            (r"\$", "shell_prompt"),
-            (r"\#", "root_shell_prompt"),
-            (r"Connection closed", "disconnected"),
+            ("password:", "password_prompt"),
+            ("yes/no", "host_key_prompt"),
+            ("Permission denied", "auth_failed"),
+            ("Welcome to", "connected"),
+            ("\\$", "shell_prompt"),
+            ("\\#", "root_shell_prompt"),
+            ("Connection closed", "disconnected"),
         ]

     def get_process_type(self):
@@ -164,7 +154,6 @@ class SSHHandler(ProcessHandler):
     def update_state(self, output):
         """Update state based on SSH connection output."""
         output_lower = output.lower()
         if "permission denied" in output_lower:
             self.current_state = "failed"
         elif "password:" in output_lower:
@@ -183,7 +172,7 @@ class SSHHandler(ProcessHandler):
         suggestions = super().get_prompt_suggestions()
         if self.current_state == "auth_prompt":
             if "password:" in self.multiplexer.get_all_output()["stdout"]:
-                suggestions.extend(["<password>"])  # Placeholder for actual password
+                suggestions.extend(["<password>"])
             elif "yes/no" in self.multiplexer.get_all_output()["stdout"]:
                 suggestions.extend(["yes", "no"])
         return suggestions
@@ -201,12 +190,12 @@ class GenericProcessHandler(ProcessHandler):
             "completed": [],
         }
         self.prompt_patterns = [
-            (r"\?\s*$", "waiting_input"),  # Lines ending with ?
-            (r">\s*$", "waiting_input"),  # Lines ending with >
-            (r":\s*$", "waiting_input"),  # Lines ending with :
-            (r"done", "completed"),
-            (r"finished", "completed"),
-            (r"exit code", "completed"),
+            ("\\?\\s*$", "waiting_input"),
+            (">\\s*$", "waiting_input"),
+            (":\\s*$", "waiting_input"),
+            ("done", "completed"),
+            ("finished", "completed"),
+            ("exit code", "completed"),
         ]

     def get_process_type(self):
@@ -215,16 +204,14 @@ class GenericProcessHandler(ProcessHandler):
     def update_state(self, output):
         """Basic state detection for generic processes."""
         output_lower = output.lower()
-        if any(pattern in output_lower for pattern in ["done", "finished", "complete"]):
+        if any((pattern in output_lower for pattern in ["done", "finished", "complete"])):
            self.current_state = "completed"
-        elif any(output.strip().endswith(char) for char in ["?", ">", ":"]):
+        elif any((output.strip().endswith(char) for char in ["?", ">", ":"])):
             self.current_state = "waiting_input"
         else:
             self.current_state = "running"


-# Handler registry
 _handler_classes = {
     "apt": AptHandler,
     "vim": VimHandler,
@@ -243,7 +230,6 @@ def detect_process_type(command):
     """Detect process type from command."""
     command_str = " ".join(command) if isinstance(command, list) else command
     command_lower = command_str.lower()
     if "apt" in command_lower or "apt-get" in command_lower:
         return "apt"
     elif "vim" in command_lower or "vi " in command_lower:
@@ -259,7 +245,6 @@ def detect_process_type(command):
     """Detect process type from command."""
     command_str = " ".join(command) if isinstance(command, list) else command
     command_lower = command_str.lower()
     if "apt" in command_lower or "apt-get" in command_lower:
         return "apt"
     elif "vim" in command_lower or "vi " in command_lower:

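Reviewer note: detect_process_type plus the _handler_classes registry form a two-step dispatch; note the file defines detect_process_type twice (hunks at -243 and -259), so the second definition silently wins. A sketch of the lookup as the registry suggests it is meant to be used; handler construction arguments are not shown in this diff and are left out:

from rp.tools.process_handlers import detect_process_type, _handler_classes

ptype = detect_process_type(["apt-get", "install", "curl"])  # -> "apt"
handler_cls = _handler_classes.get(ptype)  # -> AptHandler
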
View File

@@ -9,58 +9,50 @@ class PromptDetector:
         self.prompt_patterns = self._load_prompt_patterns()
         self.state_machines = self._load_state_machines()
         self.session_states = {}
-        self.timeout_configs = {
-            "default": 30,  # 30 seconds default timeout
-            "apt": 300,  # 5 minutes for apt operations
-            "ssh": 60,  # 1 minute for SSH connections
-            "vim": 3600,  # 1 hour for vim sessions
-        }
+        self.timeout_configs = {"default": 30, "apt": 300, "ssh": 60, "vim": 3600}

     def _load_prompt_patterns(self):
         """Load regex patterns for detecting various prompts."""
         return {
             "bash_prompt": [
-                re.compile(r"[\w\-\.]+@[\w\-\.]+:.*[\$#]\s*$"),
-                re.compile(r"\$\s*$"),
-                re.compile(r"#\s*$"),
-                re.compile(r">\s*$"),  # Continuation prompt
+                re.compile("[\\w\\-\\.]+@[\\w\\-\\.]+:.*[\\$#]\\s*$"),
+                re.compile("\\$\\s*$"),
+                re.compile("#\\s*$"),
+                re.compile(">\\s*$"),
             ],
             "confirmation": [
-                re.compile(r"[Yy]/[Nn]", re.IGNORECASE),
-                re.compile(r"[Yy]es/[Nn]o", re.IGNORECASE),
-                re.compile(r"continue\?", re.IGNORECASE),
-                re.compile(r"proceed\?", re.IGNORECASE),
+                re.compile("[Yy]/[Nn]", re.IGNORECASE),
+                re.compile("[Yy]es/[Nn]o", re.IGNORECASE),
+                re.compile("continue\\?", re.IGNORECASE),
+                re.compile("proceed\\?", re.IGNORECASE),
             ],
             "password": [
-                re.compile(r"password:", re.IGNORECASE),
-                re.compile(r"passphrase:", re.IGNORECASE),
-                re.compile(r"enter password", re.IGNORECASE),
+                re.compile("password:", re.IGNORECASE),
+                re.compile("passphrase:", re.IGNORECASE),
+                re.compile("enter password", re.IGNORECASE),
             ],
-            "sudo_password": [re.compile(r"\[sudo\].*password", re.IGNORECASE)],
+            "sudo_password": [re.compile("\\[sudo\\].*password", re.IGNORECASE)],
             "apt": [
-                re.compile(r"Do you want to continue\?", re.IGNORECASE),
-                re.compile(r"After this operation", re.IGNORECASE),
-                re.compile(r"Need to get", re.IGNORECASE),
+                re.compile("Do you want to continue\\?", re.IGNORECASE),
+                re.compile("After this operation", re.IGNORECASE),
+                re.compile("Need to get", re.IGNORECASE),
             ],
             "vim": [
-                re.compile(r"-- INSERT --"),
-                re.compile(r"-- VISUAL --"),
-                re.compile(r":"),
-                re.compile(r"Press ENTER", re.IGNORECASE),
+                re.compile("-- INSERT --"),
+                re.compile("-- VISUAL --"),
+                re.compile(":"),
+                re.compile("Press ENTER", re.IGNORECASE),
             ],
             "ssh": [
-                re.compile(r"yes/no", re.IGNORECASE),
-                re.compile(r"password:", re.IGNORECASE),
-                re.compile(r"Permission denied", re.IGNORECASE),
+                re.compile("yes/no", re.IGNORECASE),
+                re.compile("password:", re.IGNORECASE),
+                re.compile("Permission denied", re.IGNORECASE),
             ],
-            "git": [
-                re.compile(r"Username:", re.IGNORECASE),
-                re.compile(r"Email:", re.IGNORECASE),
-            ],
+            "git": [re.compile("Username:", re.IGNORECASE), re.compile("Email:", re.IGNORECASE)],
             "error": [
-                re.compile(r"error:", re.IGNORECASE),
-                re.compile(r"failed", re.IGNORECASE),
-                re.compile(r"exception", re.IGNORECASE),
+                re.compile("error:", re.IGNORECASE),
+                re.compile("failed", re.IGNORECASE),
+                re.compile("exception", re.IGNORECASE),
             ],
         }
@@ -68,14 +60,7 @@ class PromptDetector:
         """Load state machines for different process types."""
         return {
             "apt": {
-                "states": [
-                    "initial",
-                    "running",
-                    "confirming",
-                    "installing",
-                    "completed",
-                    "error",
-                ],
+                "states": ["initial", "running", "confirming", "installing", "completed", "error"],
                 "transitions": {
                     "initial": ["running"],
                     "running": ["confirming", "installing", "completed", "error"],
@@ -87,13 +72,7 @@ class PromptDetector:
                 },
             },
             "ssh": {
-                "states": [
-                    "initial",
-                    "connecting",
-                    "authenticating",
-                    "connected",
-                    "error",
-                ],
+                "states": ["initial", "connecting", "authenticating", "connected", "error"],
                 "transitions": {
                     "initial": ["connecting"],
                     "connecting": ["authenticating", "connected", "error"],
@@ -103,14 +82,7 @@ class PromptDetector:
                 },
             },
             "vim": {
-                "states": [
-                    "initial",
-                    "normal",
-                    "insert",
-                    "visual",
-                    "command",
-                    "exiting",
-                ],
+                "states": ["initial", "normal", "insert", "visual", "command", "exiting"],
                 "transitions": {
                     "initial": ["normal", "insert"],
                     "normal": ["insert", "visual", "command", "exiting"],
@@ -125,28 +97,22 @@ class PromptDetector:
     def detect_prompt(self, output, process_type="generic"):
         """Detect what type of prompt is present in the output."""
         detections = {}
-        # Check all pattern categories
         for category, patterns in self.prompt_patterns.items():
             for pattern in patterns:
                 if pattern.search(output):
                     if category not in detections:
                         detections[category] = []
                     detections[category].append(pattern.pattern)
-        # Process type specific detection
         if process_type in self.prompt_patterns:
             for pattern in self.prompt_patterns[process_type]:
                 if pattern.search(output):
                     detections[process_type] = detections.get(process_type, [])
                     detections[process_type].append(pattern.pattern)
         return detections

     def get_response_suggestions(self, prompt_detections, process_type="generic"):
         """Get suggested responses based on detected prompts."""
         suggestions = []
         for category, patterns in prompt_detections.items():
             if category == "confirmation":
                 suggestions.extend(["y", "yes", "n", "no"])
@@ -155,22 +121,21 @@ class PromptDetector:
             elif category == "sudo_password":
                 suggestions.append("<sudo_password>")
             elif category == "apt":
-                if any("continue" in p for p in patterns):
+                if any(("continue" in p for p in patterns)):
                     suggestions.extend(["y", "yes"])
             elif category == "vim":
-                if any(":" in p for p in patterns):
+                if any((":" in p for p in patterns)):
                     suggestions.extend(["w", "q", "wq", "q!"])
-                elif any("ENTER" in p for p in patterns):
+                elif any(("ENTER" in p for p in patterns)):
                     suggestions.append("\n")
             elif category == "ssh":
-                if any("yes/no" in p for p in patterns):
+                if any(("yes/no" in p for p in patterns)):
                     suggestions.extend(["yes", "no"])
-                elif any("password" in p for p in patterns):
+                elif any(("password" in p for p in patterns)):
                     suggestions.append("<password>")
             elif category == "bash_prompt":
                 suggestions.extend(["help", "ls", "pwd", "exit"])
-        return list(set(suggestions))  # Remove duplicates
+        return list(set(suggestions))

     def update_session_state(self, session_name, output, process_type="generic"):
         """Update the state machine for a session based on output."""
@@ -181,14 +146,10 @@ class PromptDetector:
                 "last_activity": time.time(),
                 "transitions": [],
             }
         session_state = self.session_states[session_name]
         old_state = session_state["current_state"]
-        # Detect prompts and determine new state
         detections = self.detect_prompt(output, process_type)
         new_state = self._determine_state_from_detections(detections, process_type, old_state)
         if new_state != old_state:
             session_state["transitions"].append(
                 {
@@ -199,7 +160,6 @@ class PromptDetector:
                 }
             )
             session_state["current_state"] = new_state
         session_state["last_activity"] = time.time()
         return new_state
@@ -207,8 +167,6 @@ class PromptDetector:
         """Determine new state based on prompt detections."""
         if process_type in self.state_machines:
             self.state_machines[process_type]
-        # State transition logic based on detections
         if "confirmation" in detections and current_state in ["running", "initial"]:
             return "confirming"
         elif "password" in detections or "sudo_password" in detections:
@@ -218,46 +176,38 @@ class PromptDetector:
         elif "bash_prompt" in detections and current_state != "initial":
             return "connected" if process_type == "ssh" else "completed"
         elif "vim" in detections:
-            if any("-- INSERT --" in p for p in detections.get("vim", [])):
+            if any(("-- INSERT --" in p for p in detections.get("vim", []))):
                 return "insert"
-            elif any("-- VISUAL --" in p for p in detections.get("vim", [])):
+            elif any(("-- VISUAL --" in p for p in detections.get("vim", []))):
                 return "visual"
-            elif any(":" in p for p in detections.get("vim", [])):
+            elif any((":" in p for p in detections.get("vim", []))):
                 return "command"
-        # Default state progression
         if current_state == "initial":
             return "running"
         elif current_state == "running" and detections:
             return "waiting_input"
-        elif current_state == "waiting_input" and not detections:
+        elif current_state == "waiting_input" and (not detections):
             return "running"
         return current_state

     def is_waiting_for_input(self, session_name):
         """Check if a session is currently waiting for input."""
         if session_name not in self.session_states:
             return False
         state = self.session_states[session_name]["current_state"]
         process_type = self.session_states[session_name]["process_type"]
-        # States that typically indicate waiting for input
         waiting_states = {
             "generic": ["waiting_input"],
             "apt": ["confirming"],
             "ssh": ["authenticating"],
             "vim": ["command", "insert", "visual"],
         }
         return state in waiting_states.get(process_type, [])

     def get_session_timeout(self, session_name):
         """Get the timeout for a session based on its process type."""
         if session_name not in self.session_states:
             return self.timeout_configs["default"]
         process_type = self.session_states[session_name]["process_type"]
         return self.timeout_configs.get(process_type, self.timeout_configs["default"])
@@ -265,30 +215,26 @@ class PromptDetector:
         """Check all sessions for timeouts and return timed out sessions."""
         timed_out = []
         current_time = time.time()
         for session_name, state in self.session_states.items():
             timeout = self.get_session_timeout(session_name)
             if current_time - state["last_activity"] > timeout:
                 timed_out.append(session_name)
         return timed_out

     def get_session_info(self, session_name):
         """Get information about a session's state."""
         if session_name not in self.session_states:
             return None
         state = self.session_states[session_name]
         return {
             "current_state": state["current_state"],
             "process_type": state["process_type"],
             "last_activity": state["last_activity"],
-            "transitions": state["transitions"][-5:],  # Last 5 transitions
+            "transitions": state["transitions"][-5:],
             "is_waiting": self.is_waiting_for_input(session_name),
         }


-# Global detector instance
 _detector = None

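Reviewer note: the detector is stateless for single-shot detection and stateful per session for the state machine. A usage sketch; PromptDetector's module path is not shown in this hunk and is assumed:

from rp.tools.prompt_detection import PromptDetector  # import path assumed

detector = PromptDetector()
detections = detector.detect_prompt("Do you want to continue? [Y/n]", process_type="apt")
suggestions = detector.get_response_suggestions(detections, process_type="apt")  # includes "y", "yes"
state = detector.update_session_state("apt-1", "Do you want to continue? [Y/n]", "apt")  # -> "confirming"
assert detector.is_waiting_for_input("apt-1")
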
rp/tools/python_exec.py Normal file (+32)
View File

@ -0,0 +1,32 @@
import contextlib
import os
import traceback
from io import StringIO
def python_exec(code, python_globals, cwd=None):
"""Execute Python code and capture the output.
Args:
code: The Python code to execute.
python_globals: Dictionary of global variables for execution.
cwd: Working directory for execution.
Returns:
Dict with status and output, or error information.
"""
try:
original_cwd = None
if cwd:
original_cwd = os.getcwd()
os.chdir(cwd)
output = StringIO()
with contextlib.redirect_stdout(output):
exec(code, python_globals)
if original_cwd:
os.chdir(original_cwd)
return {"status": "success", "output": output.getvalue()}
except Exception as e:
if original_cwd:
os.chdir(original_cwd)
return {"status": "error", "error": str(e), "traceback": traceback.format_exc()}

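Reviewer note: python_exec threads python_globals straight through exec, so state persists across calls whenever the caller reuses the same dict:

from rp.tools.python_exec import python_exec

g = {}
python_exec("x = 21", g)                  # defines x in the shared globals
result = python_exec("print(x * 2)", g)   # sees x from the previous call
assert result["output"] == "42\n"
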
rp/tools/search.py Normal file (+45)
View File

@ -0,0 +1,45 @@
import glob
import os
from typing import List
import re
def glob_files(pattern: str, path: str = ".") -> dict:
"""
Find files matching a glob pattern.
Args:
pattern: The glob pattern to match.
path: The directory to search in.
Returns:
A dictionary with the status and a list of matching files.
"""
try:
files = glob.glob(os.path.join(path, pattern), recursive=True)
return {"status": "success", "files": files}
except Exception as e:
return {"status": "error", "error": str(e)}
def grep(pattern: str, files: List[str]) -> dict:
"""
Search for a pattern in a list of files.
Args:
pattern: The regex pattern to search for.
files: A list of files to search in.
Returns:
A dictionary with the status and a list of matching lines.
"""
try:
matches = []
for file in files:
with open(file, "r") as f:
for i, line in enumerate(f):
if re.search(pattern, line):
matches.append({"file": file, "line_number": i + 1, "line": line.strip()})
return {"status": "success", "matches": matches}
except Exception as e:
return {"status": "error", "error": str(e)}

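Reviewer note: glob_files and grep compose naturally, since grep consumes exactly the file list glob_files produces:

from rp.tools.search import glob_files, grep

found = glob_files("**/*.py", path="rp")
if found["status"] == "success":
    hits = grep(r"def \w+_session", found["files"])
    for m in hits["matches"]:
        print(m["file"], m["line_number"], m["line"])

One binary or unreadable file aborts the whole grep with an error status, since the loop shares a single try block; worth noting for follow-up.
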
rp/tools/vision.py Normal file (+19)
View File

@ -0,0 +1,19 @@
from rp.vision import post_image as vision_post_image
import functools
@functools.lru_cache()
def post_image(path: str, prompt: str = None):
"""Post an image for analysis.
Args:
path: Path to the image file.
prompt: Optional prompt for analysis.
Returns:
Analysis result.
"""
try:
return vision_post_image(path=path, prompt=prompt)
except Exception:
raise

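Reviewer note: functools.lru_cache memoizes post_image by the (path, prompt) argument pair, so repeated analyses of the same image skip the backend; the try/except that only re-raises is a no-op and could be dropped. E.g.:

from rp.tools.vision import post_image

first = post_image("diagram.png", prompt="describe")  # hits the vision backend
again = post_image("diagram.png", prompt="describe")  # served from the cache
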
View File

@@ -5,12 +5,20 @@ import urllib.request
 def http_fetch(url, headers=None):
+    """Fetch content from an HTTP URL.
+
+    Args:
+        url: The URL to fetch.
+        headers: Optional HTTP headers.
+
+    Returns:
+        Dict with status and content.
+    """
     try:
         req = urllib.request.Request(url)
         if headers:
             for key, value in headers.items():
                 req.add_header(key, value)
         with urllib.request.urlopen(req) as response:
             content = response.read().decode("utf-8")
             return {"status": "success", "content": content[:10000]}
@@ -21,7 +29,6 @@ def http_fetch(url, headers=None):
 def _perform_search(base_url, query, params=None):
     try:
         full_url = f"https://static.molodetz.nl/search.cgi?query={query}"
         with urllib.request.urlopen(full_url) as response:
             content = response.read().decode("utf-8")
             return {"status": "success", "content": json.loads(content)}
@@ -30,10 +37,26 @@ def _perform_search(base_url, query, params=None):
 def web_search(query):
+    """Perform a web search.
+
+    Args:
+        query: Search query.
+
+    Returns:
+        Dict with status and search results.
+    """
     base_url = "https://search.molodetz.nl/search"
     return _perform_search(base_url, query)

 def web_search_news(query):
+    """Perform a web search for news.
+
+    Args:
+        query: Search query for news.
+
+    Returns:
+        Dict with status and news search results.
+    """
     base_url = "https://search.molodetz.nl/search"
     return _perform_search(base_url, query)
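
A usage sketch for these helpers (the module path is an assumption; only the function names and return shapes are confirmed by this diff):

from rp.tools.web import http_fetch, web_search  # module path assumed

page = http_fetch("https://example.com", headers={"Accept": "text/html"})
if page["status"] == "success":
    print(page["content"][:200])  # the tool itself truncates content to 10000 chars
results = web_search("python sqlite3 tutorial")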

12
rp/ui/__init__.py Normal file

@@ -0,0 +1,12 @@
from rp.ui.colors import Colors, Spinner
from rp.ui.display import display_tool_call, print_autonomous_header
from rp.ui.rendering import highlight_code, render_markdown

__all__ = [
    "Colors",
    "Spinner",
    "highlight_code",
    "render_markdown",
    "display_tool_call",
    "print_autonomous_header",
]

46
rp/ui/colors.py Normal file

@@ -0,0 +1,46 @@
import threading
import time

class Colors:
    RESET = "\x1b[0m"
    BOLD = "\x1b[1m"
    RED = "\x1b[91m"
    GREEN = "\x1b[92m"
    YELLOW = "\x1b[93m"
    BLUE = "\x1b[94m"
    MAGENTA = "\x1b[95m"
    CYAN = "\x1b[96m"
    GRAY = "\x1b[90m"
    WHITE = "\x1b[97m"
    BG_BLUE = "\x1b[44m"
    BG_GREEN = "\x1b[42m"
    BG_RED = "\x1b[41m"

class Spinner:
    def __init__(self, message="Processing...", spinner_chars="|/-\\"):
        self.message = message
        self.spinner_chars = spinner_chars
        self.running = False
        self.thread = None

    def start(self):
        self.running = True
        self.thread = threading.Thread(target=self._spin)
        self.thread.start()

    def stop(self):
        self.running = False
        if self.thread:
            self.thread.join()
        print("\r" + " " * (len(self.message) + 2) + "\r", end="", flush=True)

    def _spin(self):
        i = 0
        while self.running:
            char = self.spinner_chars[i % len(self.spinner_chars)]
            print(f"\r{Colors.CYAN}{char}{Colors.RESET} {self.message}", end="", flush=True)
            i += 1
            time.sleep(0.1)
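
A usage sketch for Spinner; stop() joins the background thread and clears the status line:

import time

from rp.ui.colors import Spinner

spinner = Spinner("Processing...")
spinner.start()
try:
    time.sleep(2)  # stand-in for real work
finally:
    spinner.stop()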


@@ -1,10 +1,10 @@
 import difflib
 from typing import Dict, List, Optional, Tuple

 from .colors import Colors

 class DiffStats:
     def __init__(self):
         self.insertions = 0
         self.deletions = 0
@@ -20,6 +20,7 @@ class DiffStats:
 class DiffLine:
     def __init__(
         self,
         line_type: str,
@@ -40,26 +41,20 @@ class DiffLine:
             "header": Colors.CYAN,
             "stats": Colors.BLUE,
         }.get(self.line_type, Colors.RESET)
-        prefix = {
-            "add": "+ ",
-            "delete": "- ",
-            "context": "  ",
-            "header": "",
-            "stats": "",
-        }.get(self.line_type, "  ")
+        prefix = {"add": "+ ", "delete": "- ", "context": "  ", "header": "", "stats": ""}.get(
+            self.line_type, "  "
+        )
         if show_line_nums and self.line_type in ("add", "delete", "context"):
             old_num = str(self.old_line_num) if self.old_line_num else "    "
             new_num = str(self.new_line_num) if self.new_line_num else "    "
             line_num_str = f"{Colors.YELLOW}{old_num:>4} {new_num:>4}{Colors.RESET} "
         else:
             line_num_str = ""
         return f"{line_num_str}{color}{prefix}{self.content}{Colors.RESET}"

 class DiffDisplay:
     def __init__(self, context_lines: int = 3):
         self.context_lines = context_lines
@@ -68,11 +63,9 @@ class DiffDisplay:
     ) -> Tuple[List[DiffLine], DiffStats]:
         old_lines = old_content.splitlines(keepends=True)
         new_lines = new_content.splitlines(keepends=True)
         diff_lines = []
         stats = DiffStats()
         stats.files_changed = 1
         diff = difflib.unified_diff(
             old_lines,
             new_lines,
@@ -80,10 +73,8 @@ class DiffDisplay:
             tofile=f"b/{filename}",
             n=self.context_lines,
         )
         old_line_num = 0
         new_line_num = 0
         for line in diff:
             if line.startswith("---") or line.startswith("+++"):
                 diff_lines.append(DiffLine("header", line.rstrip()))
@@ -104,19 +95,17 @@ class DiffDisplay:
                 )
                 old_line_num += 1
                 new_line_num += 1
         stats.modifications = min(stats.insertions, stats.deletions)
-        return diff_lines, stats
+        return (diff_lines, stats)

     def _parse_hunk_header(self, header: str) -> Tuple[int, int]:
         try:
             parts = header.split("@@")[1].strip().split()
             old_start = int(parts[0].split(",")[0].replace("-", ""))
             new_start = int(parts[1].split(",")[0].replace("+", ""))
-            return old_start, new_start
+            return (old_start, new_start)
         except (IndexError, ValueError):
-            return 0, 0
+            return (0, 0)

     def render_diff(
         self,
@@ -126,19 +115,15 @@ class DiffDisplay:
         show_stats: bool = True,
     ) -> str:
         output = []
         if show_stats:
             output.append(f"\n{Colors.BOLD}{Colors.BLUE}{'=' * 60}{Colors.RESET}")
             output.append(f"{Colors.BOLD}{Colors.BLUE}DIFF SUMMARY{Colors.RESET}")
             output.append(f"{Colors.BOLD}{Colors.BLUE}{'=' * 60}{Colors.RESET}")
             output.append(f"{Colors.BLUE}{stats}{Colors.RESET}\n")
         for line in diff_lines:
             output.append(line.format(show_line_nums))
         if show_stats:
             output.append(f"\n{Colors.BOLD}{Colors.BLUE}{'=' * 60}{Colors.RESET}\n")
         return "\n".join(output)

     def display_file_diff(
@@ -149,33 +134,23 @@ class DiffDisplay:
         show_line_nums: bool = True,
     ) -> str:
         diff_lines, stats = self.create_diff(old_content, new_content, filename)
         if not diff_lines:
             return f"{Colors.GRAY}No changes detected{Colors.RESET}"
         return self.render_diff(diff_lines, stats, show_line_nums)

     def display_side_by_side(
-        self,
-        old_content: str,
-        new_content: str,
-        filename: str = "file",
-        width: int = 80,
+        self, old_content: str, new_content: str, filename: str = "file", width: int = 80
     ) -> str:
         old_lines = old_content.splitlines()
         new_lines = new_content.splitlines()
         matcher = difflib.SequenceMatcher(None, old_lines, new_lines)
         output = []
         output.append(f"\n{Colors.BOLD}{Colors.BLUE}{'=' * width}{Colors.RESET}")
         output.append(
             f"{Colors.BOLD}{Colors.BLUE}SIDE-BY-SIDE COMPARISON: {filename}{Colors.RESET}"
         )
         output.append(f"{Colors.BOLD}{Colors.BLUE}{'=' * width}{Colors.RESET}\n")
         half_width = (width - 5) // 2
         for tag, i1, i2, j1, j2 in matcher.get_opcodes():
             if tag == "equal":
                 for i, (old_line, new_line) in enumerate(zip(old_lines[i1:i2], new_lines[j1:j2])):
@@ -200,7 +175,6 @@ class DiffDisplay:
             for new_line in new_lines[j1:j2]:
                 new_display = new_line[:half_width].ljust(half_width)
                 output.append(f"{' ' * half_width} | {Colors.GREEN}{new_display}{Colors.RESET}")
         output.append(f"\n{Colors.BOLD}{Colors.BLUE}{'=' * width}{Colors.RESET}\n")
         return "\n".join(output)

@@ -213,7 +187,6 @@ def display_diff(
     context_lines: int = 3,
 ) -> str:
     displayer = DiffDisplay(context_lines)
     if format_type == "side-by-side":
         return displayer.display_side_by_side(old_content, new_content, filename)
     else:
@@ -223,7 +196,6 @@ def display_diff(
 def get_diff_stats(old_content: str, new_content: str) -> Dict[str, int]:
     displayer = DiffDisplay()
     _, stats = displayer.create_diff(old_content, new_content)
     return {
         "insertions": stats.insertions,
         "deletions": stats.deletions,

58
rp/ui/display.py Normal file

@@ -0,0 +1,58 @@
from rp.ui.colors import Colors

def display_tool_call(tool_name, arguments, status="running", result=None):
    if status == "running":
        return
    args_str = ", ".join([f"{k}={str(v)[:20]}" for k, v in list(arguments.items())[:2]])
    line = f"{tool_name}({args_str})"
    if len(line) > 80:
        line = line[:77] + "..."
    print(f"{Colors.GRAY}{line}{Colors.RESET}")

def print_autonomous_header(task):
    print(f"{Colors.BOLD}Task:{Colors.RESET} {task}")
    print(f"{Colors.GRAY}r will work continuously until the task is complete.{Colors.RESET}")
    print(f"{Colors.GRAY}Press Ctrl+C twice to interrupt.{Colors.RESET}\n")
    print(f"{Colors.BOLD}{'─' * 80}{Colors.RESET}\n")

def display_multiplexer_status(sessions):
    """Display the status of background sessions."""
    if not sessions:
        print(f"{Colors.GRAY}No background sessions running{Colors.RESET}")
        return
    print(f"\n{Colors.BOLD}Background Sessions:{Colors.RESET}")
    print(f"{Colors.GRAY}{'─' * 60}{Colors.RESET}")
    for session_name, session_info in sessions.items():
        status = session_info.get("status", "unknown")
        pid = session_info.get("pid", "N/A")
        command = session_info.get("command", "N/A")
        status_color = {"running": Colors.GREEN, "stopped": Colors.RED, "error": Colors.RED}.get(
            status, Colors.YELLOW
        )
        print(f"  {Colors.CYAN}{session_name}{Colors.RESET}")
        print(f"    Status: {status_color}{status}{Colors.RESET}")
        print(f"    PID: {pid}")
        print(f"    Command: {command}")
        if "start_time" in session_info:
            import time

            elapsed = time.time() - session_info["start_time"]
            print(f"    Running for: {elapsed:.1f}s")
        print()

def display_background_event(event):
    """Display a background event."""
    session_name = event.get("session_name", "unknown")
    timestamp = event.get("timestamp", 0)
    message = event.get("message", "")
    import datetime

    time_str = datetime.datetime.fromtimestamp(timestamp).strftime("%H:%M:%S")
    print(
        f"{Colors.GRAY}[{time_str}]{Colors.RESET} {Colors.CYAN}{session_name}{Colors.RESET}: {message}"
    )


@@ -1,11 +1,11 @@
 from datetime import datetime
 from typing import Dict, List, Optional

 from .colors import Colors
 from .progress import ProgressBar

 class EditOperation:
     def __init__(
         self,
         op_type: str,
@@ -31,38 +31,30 @@ class EditOperation:
             "DELETE": Colors.RED,
             "WRITE": Colors.BLUE,
         }
         color = op_colors.get(self.op_type, Colors.RESET)
-        status_icon = {
-            "pending": "○",
-            "in_progress": "◐",
-            "completed": "✓",
-            "failed": "✗",
-        }.get(self.status, "")
+        status_icon = {"pending": "○", "in_progress": "◐", "completed": "✓", "failed": "✗"}.get(
+            self.status, ""
+        )
         return f"{color}{status_icon} [{self.op_type}]{Colors.RESET} {self.filepath}"

     def format_details(self, show_content: bool = True) -> str:
         output = [self.format_operation()]
         if self.op_type in ("INSERT", "REPLACE"):
             output.append(f"  {Colors.GRAY}Position: {self.start_pos}-{self.end_pos}{Colors.RESET}")
         if show_content:
             if self.old_content:
                 lines = self.old_content.split("\n")
                 preview = lines[0][:60] + ("..." if len(lines[0]) > 60 or len(lines) > 1 else "")
                 output.append(f"  {Colors.RED}- {preview}{Colors.RESET}")
             if self.content:
                 lines = self.content.split("\n")
                 preview = lines[0][:60] + ("..." if len(lines[0]) > 60 or len(lines) > 1 else "")
                 output.append(f"  {Colors.GREEN}+ {preview}{Colors.RESET}")
         return "\n".join(output)

 class EditTracker:
     def __init__(self):
         self.operations: List[EditOperation] = []
         self.current_file: Optional[str] = None
@@ -85,10 +77,10 @@ class EditTracker:
     def get_stats(self) -> Dict[str, int]:
         stats = {
             "total": len(self.operations),
-            "completed": sum(1 for op in self.operations if op.status == "completed"),
-            "pending": sum(1 for op in self.operations if op.status == "pending"),
-            "in_progress": sum(1 for op in self.operations if op.status == "in_progress"),
-            "failed": sum(1 for op in self.operations if op.status == "failed"),
+            "completed": sum((1 for op in self.operations if op.status == "completed")),
+            "pending": sum((1 for op in self.operations if op.status == "pending")),
+            "in_progress": sum((1 for op in self.operations if op.status == "in_progress")),
+            "failed": sum((1 for op in self.operations if op.status == "failed")),
         }
         return stats
@@ -96,89 +88,70 @@ class EditTracker:
         if not self.operations:
             return 0.0
         stats = self.get_stats()
-        return (stats["completed"] / stats["total"]) * 100
+        return stats["completed"] / stats["total"] * 100

     def display_progress(self) -> str:
         if not self.operations:
             return f"{Colors.GRAY}No edit operations tracked{Colors.RESET}"
         output = []
         output.append(f"\n{Colors.BOLD}{Colors.BLUE}{'=' * 60}{Colors.RESET}")
         output.append(f"{Colors.BOLD}{Colors.BLUE}EDIT OPERATIONS PROGRESS{Colors.RESET}")
         output.append(f"{Colors.BOLD}{Colors.BLUE}{'=' * 60}{Colors.RESET}\n")
         stats = self.get_stats()
         self.get_completion_percentage()
         progress_bar = ProgressBar(total=stats["total"], width=40)
         progress_bar.current = stats["completed"]
         bar_display = progress_bar._get_bar_display()
         output.append(f"Progress: {bar_display}")
         output.append(
-            f"{Colors.BLUE}Total: {stats['total']}, Completed: {stats['completed']}, "
-            f"Pending: {stats['pending']}, Failed: {stats['failed']}{Colors.RESET}\n"
+            f"{Colors.BLUE}Total: {stats['total']}, Completed: {stats['completed']}, Pending: {stats['pending']}, Failed: {stats['failed']}{Colors.RESET}\n"
         )
         output.append(f"{Colors.BOLD}Recent Operations:{Colors.RESET}")
         for i, op in enumerate(self.operations[-5:], 1):
             output.append(f"{i}. {op.format_operation()}")
         output.append(f"\n{Colors.BOLD}{Colors.BLUE}{'=' * 60}{Colors.RESET}\n")
         return "\n".join(output)

     def display_timeline(self, show_content: bool = False) -> str:
         if not self.operations:
             return f"{Colors.GRAY}No edit operations tracked{Colors.RESET}"
         output = []
         output.append(f"\n{Colors.BOLD}{Colors.BLUE}{'=' * 60}{Colors.RESET}")
         output.append(f"{Colors.BOLD}{Colors.BLUE}EDIT TIMELINE{Colors.RESET}")
         output.append(f"{Colors.BOLD}{Colors.BLUE}{'=' * 60}{Colors.RESET}\n")
         for i, op in enumerate(self.operations, 1):
             timestamp = op.timestamp.strftime("%H:%M:%S")
             output.append(f"{Colors.GRAY}[{timestamp}]{Colors.RESET} {i}.")
             output.append(op.format_details(show_content))
             output.append("")
         stats = self.get_stats()
         output.append(f"{Colors.BOLD}Summary:{Colors.RESET}")
         output.append(
-            f"{Colors.BLUE}Total operations: {stats['total']}, "
-            f"Completed: {stats['completed']}, Failed: {stats['failed']}{Colors.RESET}"
+            f"{Colors.BLUE}Total operations: {stats['total']}, Completed: {stats['completed']}, Failed: {stats['failed']}{Colors.RESET}"
         )
         output.append(f"\n{Colors.BOLD}{Colors.BLUE}{'=' * 60}{Colors.RESET}\n")
         return "\n".join(output)

     def display_summary(self) -> str:
         if not self.operations:
             return f"{Colors.GRAY}No edits to summarize{Colors.RESET}"
         stats = self.get_stats()
         files_modified = len({op.filepath for op in self.operations})
         output = []
         output.append(f"\n{Colors.BOLD}{Colors.GREEN}{'=' * 60}{Colors.RESET}")
         output.append(f"{Colors.BOLD}{Colors.GREEN}EDIT SUMMARY{Colors.RESET}")
         output.append(f"{Colors.BOLD}{Colors.GREEN}{'=' * 60}{Colors.RESET}\n")
         output.append(f"{Colors.GREEN}Files Modified: {files_modified}{Colors.RESET}")
         output.append(f"{Colors.GREEN}Total Operations: {stats['total']}{Colors.RESET}")
         output.append(f"{Colors.GREEN}Successful: {stats['completed']}{Colors.RESET}")
         if stats["failed"] > 0:
             output.append(f"{Colors.RED}Failed: {stats['failed']}{Colors.RESET}")
         output.append(f"\n{Colors.BOLD}Operations by Type:{Colors.RESET}")
         op_types = {}
         for op in self.operations:
             op_types[op.op_type] = op_types.get(op.op_type, 0) + 1
         for op_type, count in sorted(op_types.items()):
             output.append(f"  {op_type}: {count}")
         output.append(f"\n{Colors.BOLD}{Colors.GREEN}{'=' * 60}{Colors.RESET}\n")
         return "\n".join(output)


@@ -13,7 +13,6 @@ class OutputFormatter:
     def output(self, data: Any, message_type: str = "response"):
         if self.quiet and message_type not in ["error", "result"]:
             return
         if self.format_type == "json":
             self._output_json(data, message_type)
         elif self.format_type == "structured":
@@ -22,11 +21,7 @@ class OutputFormatter:
             self._output_text(data, message_type)

     def _output_json(self, data: Any, message_type: str):
-        output = {
-            "type": message_type,
-            "timestamp": datetime.now().isoformat(),
-            "data": data,
-        }
+        output = {"type": message_type, "timestamp": datetime.now().isoformat(), "data": data}
         print(json.dumps(output, indent=2))

     def _output_structured(self, data: Any, message_type: str):


@@ -36,7 +36,6 @@ class ProgressIndicator:
     def _animate(self):
         spinner = ["⠋", "⠙", "⠹", "⠸", "⠼", "⠴", "⠦", "⠧", "⠇", "⠏"]
         idx = 0
         while self.running:
             sys.stdout.write(f"\r{spinner[idx]} {self.message}...")
             sys.stdout.flush()
@@ -60,14 +59,11 @@ class ProgressBar:
         if self.total == 0:
             percent = 100
         else:
-            percent = int((self.current / self.total) * 100)
+            percent = int(self.current / self.total * 100)
-        filled = int((self.current / self.total) * self.width) if self.total > 0 else self.width
+        filled = int(self.current / self.total * self.width) if self.total > 0 else self.width
         bar = "█" * filled + "░" * (self.width - filled)
         sys.stdout.write(f"\r{self.description}: |{bar}| {percent}% ({self.current}/{self.total})")
         sys.stdout.flush()
         if self.current >= self.total:
             sys.stdout.write("\n")


@@ -1,13 +1,11 @@
 import re

-from pr.config import LANGUAGE_KEYWORDS
-from pr.ui.colors import Colors
+from rp.config import LANGUAGE_KEYWORDS
+from rp.ui.colors import Colors

 def highlight_code(code, language=None, syntax_highlighting=True):
     if not syntax_highlighting:
         return code
     if not language:
         if "def " in code or "import " in code:
             language = "python"
@@ -15,26 +13,21 @@ def highlight_code(code, language=None, syntax_highlighting=True):
             language = "javascript"
         elif "public " in code or "class " in code:
             language = "java"
     if language and language in LANGUAGE_KEYWORDS:
         keywords = LANGUAGE_KEYWORDS[language]
         for keyword in keywords:
-            pattern = r"\b" + re.escape(keyword) + r"\b"
+            pattern = "\\b" + re.escape(keyword) + "\\b"
             code = re.sub(pattern, f"{Colors.BLUE}{keyword}{Colors.RESET}", code)
-    code = re.sub(r'"([^"]*)"', f'{Colors.GREEN}"\\1"{Colors.RESET}', code)
-    code = re.sub(r"'([^']*)'", f"{Colors.GREEN}'\\1'{Colors.RESET}", code)
-    code = re.sub(r"#(.*)$", f"{Colors.GRAY}#\\1{Colors.RESET}", code, flags=re.MULTILINE)
-    code = re.sub(r"//(.*)$", f"{Colors.GRAY}//\\1{Colors.RESET}", code, flags=re.MULTILINE)
+    code = re.sub('"([^"]*)"', f'{Colors.GREEN}"\\1"{Colors.RESET}', code)
+    code = re.sub("'([^']*)'", f"{Colors.GREEN}'\\1'{Colors.RESET}", code)
+    code = re.sub("#(.*)$", f"{Colors.GRAY}#\\1{Colors.RESET}", code, flags=re.MULTILINE)
+    code = re.sub("//(.*)$", f"{Colors.GRAY}//\\1{Colors.RESET}", code, flags=re.MULTILINE)
     return code

 def render_markdown(text, syntax_highlighting=True):
     if not syntax_highlighting:
         return text
     code_blocks = []

     def extract_code_block(match):
@@ -46,8 +39,7 @@ def render_markdown(text, syntax_highlighting=True):
         code_blocks.append(full_block)
         return placeholder

-    text = re.sub(r"```(\w*)\n(.*?)\n?```", extract_code_block, text, flags=re.DOTALL)
+    text = re.sub("```(\\w*)\\n(.*?)\\n?```", extract_code_block, text, flags=re.DOTALL)
     inline_codes = []

     def extract_inline_code(match):
@@ -56,8 +48,7 @@ def render_markdown(text, syntax_highlighting=True):
         inline_codes.append(f"{Colors.YELLOW}{code}{Colors.RESET}")
         return placeholder

-    text = re.sub(r"`([^`]+)`", extract_inline_code, text)
+    text = re.sub("`([^`]+)`", extract_inline_code, text)
     lines = text.split("\n")
     processed_lines = []
     for line in lines:
@@ -69,35 +60,32 @@ def render_markdown(text, syntax_highlighting=True):
             line = f"{Colors.BOLD}{Colors.MAGENTA}{line[2:]}{Colors.RESET}"
         elif line.startswith("> "):
             line = f"{Colors.CYAN}> {line[2:]}{Colors.RESET}"
-        elif re.match(r"^\s*[\*\-\+]\s", line):
-            match = re.match(r"^(\s*)([\*\-\+])(\s+.*)", line)
+        elif re.match("^\\s*[\\*\\-\\+]\\s", line):
+            match = re.match("^(\\s*)([\\*\\-\\+])(\\s+.*)", line)
             if match:
                 line = (
                     f"{match.group(1)}{Colors.YELLOW}{match.group(2)}{Colors.RESET}{match.group(3)}"
                 )
-        elif re.match(r"^\s*\d+\.\s", line):
-            match = re.match(r"^(\s*)(\d+\.)(\s+.*)", line)
+        elif re.match("^\\s*\\d+\\.\\s", line):
+            match = re.match("^(\\s*)(\\d+\\.)(\\s+.*)", line)
             if match:
                 line = (
                     f"{match.group(1)}{Colors.YELLOW}{match.group(2)}{Colors.RESET}{match.group(3)}"
                 )
         processed_lines.append(line)
     text = "\n".join(processed_lines)
     text = re.sub(
-        r"\[(.*?)\]\((.*?)\)",
+        "\\[(.*?)\\]\\((.*?)\\)",
         f"{Colors.BLUE}\\1{Colors.RESET}{Colors.GRAY}(\\2){Colors.RESET}",
         text,
     )
-    text = re.sub(r"~~(.*?)~~", f"{Colors.GRAY}\\1{Colors.RESET}", text)
-    text = re.sub(r"\*\*(.*?)\*\*", f"{Colors.BOLD}\\1{Colors.RESET}", text)
-    text = re.sub(r"__(.*?)__", f"{Colors.BOLD}\\1{Colors.RESET}", text)
-    text = re.sub(r"\*(.*?)\*", f"{Colors.CYAN}\\1{Colors.RESET}", text)
-    text = re.sub(r"_(.*?)_", f"{Colors.CYAN}\\1{Colors.RESET}", text)
+    text = re.sub("~~(.*?)~~", f"{Colors.GRAY}\\1{Colors.RESET}", text)
+    text = re.sub("\\*\\*(.*?)\\*\\*", f"{Colors.BOLD}\\1{Colors.RESET}", text)
+    text = re.sub("__(.*?)__", f"{Colors.BOLD}\\1{Colors.RESET}", text)
+    text = re.sub("\\*(.*?)\\*", f"{Colors.CYAN}\\1{Colors.RESET}", text)
+    text = re.sub("_(.*?)_", f"{Colors.CYAN}\\1{Colors.RESET}", text)
     for i, code in enumerate(inline_codes):
         text = text.replace(f"%%INLINECODE{i}%%", code)
     for i, block in enumerate(code_blocks):
         text = text.replace(f"%%CODEBLOCK{i}%%", block)
     return text
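
A quick sketch of both renderers as exported from rp.ui:

from rp.ui import highlight_code, render_markdown

print(render_markdown("# Title\nSome **bold** text with `inline code`."))
print(highlight_code('def greet():\n    print("hi")', language="python"))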

42
rp/vision.py Executable file

@@ -0,0 +1,42 @@
import argparse
import base64
import http.client
import json
import pathlib

DEFAULT_URL = "https://static.molodetz.nl/rp.vision.cgi"

def post_image(image_path: str, prompt: str = "", url: str = DEFAULT_URL):
    image_path = str(pathlib.Path(image_path).resolve().absolute())
    if not url:
        url = DEFAULT_URL
    url_parts = url.split("/")
    host = url_parts[2]
    path = "/" + "/".join(url_parts[3:])
    with open(image_path, "rb") as file:
        image_data = file.read()
    # Send the image as base64-encoded JSON via a raw HTTPS request.
    base64_data = base64.b64encode(image_data).decode("utf-8")
    payload = {"data": base64_data, "path": image_path, "prompt": prompt}
    body = json.dumps(payload).encode("utf-8")
    headers = {
        "Content-Type": "application/json",
        "Content-Length": str(len(body)),
        "User-Agent": "Python http.client",
    }
    conn = http.client.HTTPSConnection(host)
    conn.request("POST", path, body, headers)
    resp = conn.getresponse()
    data = resp.read()
    print("Status:", resp.status, resp.reason)
    print(data.decode())

if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("image_path")
    parser.add_argument("--prompt", default="")
    parser.add_argument("--url", default=DEFAULT_URL)
    args = parser.parse_args()
    # Arguments must match the signature (image_path, prompt, url).
    post_image(args.image_path, args.prompt, args.url)
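
Run directly, the script base64-encodes the image and posts it to the vision endpoint:

python rp/vision.py photo.jpg --prompt "Describe this image"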


@@ -2,10 +2,4 @@ from .workflow_definition import ExecutionMode, Workflow, WorkflowStep
 from .workflow_engine import WorkflowEngine
 from .workflow_storage import WorkflowStorage

-__all__ = [
-    "Workflow",
-    "WorkflowStep",
-    "ExecutionMode",
-    "WorkflowEngine",
-    "WorkflowStorage",
-]
+__all__ = ["Workflow", "WorkflowStep", "ExecutionMode", "WorkflowEngine", "WorkflowStorage"]


@@ -2,11 +2,11 @@ import re
 import time
 from concurrent.futures import ThreadPoolExecutor, as_completed
 from typing import Any, Callable, Dict, List, Optional

 from .workflow_definition import ExecutionMode, Workflow, WorkflowStep

 class WorkflowExecutionContext:
     def __init__(self):
         self.variables: Dict[str, Any] = {}
         self.step_results: Dict[str, Any] = {}
@@ -36,6 +36,7 @@ class WorkflowExecutionContext:
 class WorkflowEngine:
     def __init__(self, tool_executor: Callable, max_workers: int = 5):
         self.tool_executor = tool_executor
         self.max_workers = max_workers
@@ -43,12 +44,8 @@ class WorkflowEngine:
     def _evaluate_condition(self, condition: str, context: WorkflowExecutionContext) -> bool:
         if not condition:
             return True
         try:
-            safe_locals = {
-                "variables": context.variables,
-                "results": context.step_results,
-            }
+            safe_locals = {"variables": context.variables, "results": context.step_results}
             return eval(condition, {"__builtins__": {}}, safe_locals)
         except Exception:
             return False
@@ -59,7 +56,7 @@ class WorkflowEngine:
         substituted = {}
         for key, value in arguments.items():
             if isinstance(value, str):
-                pattern = r"\$\{([^}]+)\}"
+                pattern = "\\$\\{([^}]+)\\}"
                 matches = re.findall(pattern, value)
                 for match in matches:
                     if match.startswith("step."):
@@ -83,27 +80,18 @@ class WorkflowEngine:
         if not self._evaluate_condition(step.condition, context):
             context.log_event("skipped", step.step_id, {"reason": "condition_not_met"})
             return {"status": "skipped", "step_id": step.step_id}
         arguments = self._substitute_variables(step.arguments, context)
         start_time = time.time()
         retry_attempts = 0
         last_error = None
         while retry_attempts <= step.retry_count:
             try:
                 context.log_event(
                     "executing",
                     step.step_id,
-                    {
-                        "tool": step.tool_name,
-                        "arguments": arguments,
-                        "attempt": retry_attempts + 1,
-                    },
+                    {"tool": step.tool_name, "arguments": arguments, "attempt": retry_attempts + 1},
                 )
                 result = self.tool_executor(step.tool_name, arguments)
                 execution_time = time.time() - start_time
                 context.set_step_result(step.step_id, result)
                 context.log_event(
@@ -114,20 +102,17 @@ class WorkflowEngine:
                         "result_size": len(str(result)) if result else 0,
                     },
                 )
                 return {
                     "status": "success",
                     "step_id": step.step_id,
                     "result": result,
                     "execution_time": execution_time,
                 }
             except Exception as e:
                 last_error = str(e)
                 retry_attempts += 1
                 if retry_attempts <= step.retry_count:
                     time.sleep(1 * retry_attempts)
         context.log_event("failed", step.step_id, {"error": last_error})
         return {
             "status": "failed",
@@ -140,46 +125,37 @@ class WorkflowEngine:
         self, completed_step: WorkflowStep, result: Dict[str, Any], workflow: Workflow
     ) -> List[WorkflowStep]:
         next_steps = []
         if result["status"] == "success" and completed_step.on_success:
             for step_id in completed_step.on_success:
                 step = workflow.get_step(step_id)
                 if step:
                     next_steps.append(step)
         elif result["status"] == "failed" and completed_step.on_failure:
             for step_id in completed_step.on_failure:
                 step = workflow.get_step(step_id)
                 if step:
                     next_steps.append(step)
         elif workflow.execution_mode == ExecutionMode.SEQUENTIAL:
             current_index = workflow.steps.index(completed_step)
             if current_index + 1 < len(workflow.steps):
                 next_steps.append(workflow.steps[current_index + 1])
         return next_steps

     def execute_workflow(
         self, workflow: Workflow, initial_variables: Optional[Dict[str, Any]] = None
     ) -> WorkflowExecutionContext:
         context = WorkflowExecutionContext()
         if initial_variables:
             context.variables.update(initial_variables)
         if workflow.variables:
             context.variables.update(workflow.variables)
         context.log_event("workflow_started", "workflow", {"name": workflow.name})
         if workflow.execution_mode == ExecutionMode.PARALLEL:
             with ThreadPoolExecutor(max_workers=self.max_workers) as executor:
                 futures = {
                     executor.submit(self._execute_step, step, context): step
                     for step in workflow.steps
                 }
                 for future in as_completed(futures):
                     step = futures[future]
                     try:
@@ -187,23 +163,17 @@ class WorkflowEngine:
                         context.log_event("step_completed", step.step_id, result)
                     except Exception as e:
                         context.log_event("step_failed", step.step_id, {"error": str(e)})
         else:
             pending_steps = workflow.get_initial_steps()
             executed_step_ids = set()
             while pending_steps:
                 step = pending_steps.pop(0)
                 if step.step_id in executed_step_ids:
                     continue
                 result = self._execute_step(step, context)
                 executed_step_ids.add(step.step_id)
                 next_steps = self._get_next_steps(step, result, workflow)
                 pending_steps.extend(next_steps)
         context.log_event(
             "workflow_completed",
             "workflow",
@@ -212,5 +182,4 @@ class WorkflowEngine:
                 "executed_steps": list(context.step_results.keys()),
             },
         )
         return context
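
The engine assumes only that tool_executor is a callable taking (tool_name, arguments) and returning a result dict; a minimal sketch (the "echo" tool is hypothetical):

from rp.workflows import WorkflowEngine

def tool_executor(tool_name, arguments):
    # Dispatch to real tools here; the engine retries a step on exceptions
    # up to its retry_count, with a linear backoff of 1s per attempt.
    if tool_name == "echo":
        return {"status": "success", "echo": arguments}
    raise ValueError(f"unknown tool: {tool_name}")

engine = WorkflowEngine(tool_executor, max_workers=2)
# engine.execute_workflow(workflow) returns a WorkflowExecutionContext whose
# step_results and execution_log can be persisted via WorkflowStorage.save_execution.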


@@ -2,11 +2,11 @@ import json
 import sqlite3
 import time
 from typing import List, Optional

 from .workflow_definition import Workflow

 class WorkflowStorage:
     def __init__(self, db_path: str):
         self.db_path = db_path
         self._initialize_storage()
@@ -14,55 +14,21 @@ class WorkflowStorage:
     def _initialize_storage(self):
         conn = sqlite3.connect(self.db_path, check_same_thread=False)
         cursor = conn.cursor()
-        cursor.execute(
-            """
-            CREATE TABLE IF NOT EXISTS workflows (
-                workflow_id TEXT PRIMARY KEY,
-                name TEXT NOT NULL,
-                description TEXT,
-                workflow_data TEXT NOT NULL,
-                created_at INTEGER NOT NULL,
-                updated_at INTEGER NOT NULL,
-                execution_count INTEGER DEFAULT 0,
-                last_execution_at INTEGER,
-                tags TEXT
-            )
-        """
-        )
-        cursor.execute(
-            """
-            CREATE TABLE IF NOT EXISTS workflow_executions (
-                execution_id TEXT PRIMARY KEY,
-                workflow_id TEXT NOT NULL,
-                started_at INTEGER NOT NULL,
-                completed_at INTEGER,
-                status TEXT NOT NULL,
-                execution_log TEXT,
-                variables TEXT,
-                step_results TEXT,
-                FOREIGN KEY (workflow_id) REFERENCES workflows(workflow_id)
-            )
-        """
-        )
-        cursor.execute(
-            """
-            CREATE INDEX IF NOT EXISTS idx_workflow_name ON workflows(name)
-        """
-        )
-        cursor.execute(
-            """
-            CREATE INDEX IF NOT EXISTS idx_execution_workflow ON workflow_executions(workflow_id)
-        """
-        )
-        cursor.execute(
-            """
-            CREATE INDEX IF NOT EXISTS idx_execution_started ON workflow_executions(started_at)
-        """
-        )
+        cursor.execute(
+            "\n CREATE TABLE IF NOT EXISTS workflows (\n workflow_id TEXT PRIMARY KEY,\n name TEXT NOT NULL,\n description TEXT,\n workflow_data TEXT NOT NULL,\n created_at INTEGER NOT NULL,\n updated_at INTEGER NOT NULL,\n execution_count INTEGER DEFAULT 0,\n last_execution_at INTEGER,\n tags TEXT\n )\n "
+        )
+        cursor.execute(
+            "\n CREATE TABLE IF NOT EXISTS workflow_executions (\n execution_id TEXT PRIMARY KEY,\n workflow_id TEXT NOT NULL,\n started_at INTEGER NOT NULL,\n completed_at INTEGER,\n status TEXT NOT NULL,\n execution_log TEXT,\n variables TEXT,\n step_results TEXT,\n FOREIGN KEY (workflow_id) REFERENCES workflows(workflow_id)\n )\n "
+        )
+        cursor.execute(
+            "\n CREATE INDEX IF NOT EXISTS idx_workflow_name ON workflows(name)\n "
+        )
+        cursor.execute(
+            "\n CREATE INDEX IF NOT EXISTS idx_execution_workflow ON workflow_executions(workflow_id)\n "
+        )
+        cursor.execute(
+            "\n CREATE INDEX IF NOT EXISTS idx_execution_started ON workflow_executions(started_at)\n "
+        )
         conn.commit()
         conn.close()
@@ -71,19 +37,12 @@ class WorkflowStorage:
         workflow_data = json.dumps(workflow.to_dict())
         workflow_id = hashlib.sha256(workflow.name.encode()).hexdigest()[:16]
         conn = sqlite3.connect(self.db_path, check_same_thread=False)
         cursor = conn.cursor()
         current_time = int(time.time())
         tags_json = json.dumps(workflow.tags)
         cursor.execute(
-            """
-            INSERT OR REPLACE INTO workflows
-            (workflow_id, name, description, workflow_data, created_at, updated_at, tags)
-            VALUES (?, ?, ?, ?, ?, ?, ?)
-        """,
+            "\n INSERT OR REPLACE INTO workflows\n (workflow_id, name, description, workflow_data, created_at, updated_at, tags)\n VALUES (?, ?, ?, ?, ?, ?, ?)\n ",
             (
                 workflow_id,
                 workflow.name,
@@ -94,20 +53,16 @@ class WorkflowStorage:
                 tags_json,
             ),
         )
         conn.commit()
         conn.close()
         return workflow_id

     def load_workflow(self, workflow_id: str) -> Optional[Workflow]:
         conn = sqlite3.connect(self.db_path, check_same_thread=False)
         cursor = conn.cursor()
         cursor.execute("SELECT workflow_data FROM workflows WHERE workflow_id = ?", (workflow_id,))
         row = cursor.fetchone()
         conn.close()
         if row:
             workflow_dict = json.loads(row[0])
             return Workflow.from_dict(workflow_dict)
@@ -116,11 +71,9 @@ class WorkflowStorage:
     def load_workflow_by_name(self, name: str) -> Optional[Workflow]:
         conn = sqlite3.connect(self.db_path, check_same_thread=False)
         cursor = conn.cursor()
         cursor.execute("SELECT workflow_data FROM workflows WHERE name = ?", (name,))
         row = cursor.fetchone()
         conn.close()
         if row:
             workflow_dict = json.loads(row[0])
             return Workflow.from_dict(workflow_dict)
@@ -129,26 +82,15 @@ class WorkflowStorage:
     def list_workflows(self, tag: Optional[str] = None) -> List[dict]:
         conn = sqlite3.connect(self.db_path, check_same_thread=False)
         cursor = conn.cursor()
         if tag:
             cursor.execute(
-                """
-                SELECT workflow_id, name, description, execution_count, last_execution_at, tags
-                FROM workflows
-                WHERE tags LIKE ?
-                ORDER BY name
-            """,
+                "\n SELECT workflow_id, name, description, execution_count, last_execution_at, tags\n FROM workflows\n WHERE tags LIKE ?\n ORDER BY name\n ",
                 (f'%"{tag}"%',),
             )
         else:
             cursor.execute(
-                """
-                SELECT workflow_id, name, description, execution_count, last_execution_at, tags
-                FROM workflows
-                ORDER BY name
-            """
+                "\n SELECT workflow_id, name, description, execution_count, last_execution_at, tags\n FROM workflows\n ORDER BY name\n "
             )
         workflows = []
         for row in cursor.fetchall():
             workflows.append(
@@ -161,22 +103,17 @@ class WorkflowStorage:
                     "tags": json.loads(row[5]) if row[5] else [],
                 }
             )
         conn.close()
         return workflows

     def delete_workflow(self, workflow_id: str) -> bool:
         conn = sqlite3.connect(self.db_path, check_same_thread=False)
         cursor = conn.cursor()
         cursor.execute("DELETE FROM workflows WHERE workflow_id = ?", (workflow_id,))
         deleted = cursor.rowcount > 0
         cursor.execute("DELETE FROM workflow_executions WHERE workflow_id = ?", (workflow_id,))
         conn.commit()
         conn.close()
         return deleted

     def save_execution(
@@ -185,23 +122,16 @@ class WorkflowStorage:
         import uuid

         execution_id = str(uuid.uuid4())[:16]
         conn = sqlite3.connect(self.db_path, check_same_thread=False)
         cursor = conn.cursor()
         started_at = (
             int(execution_context.execution_log[0]["timestamp"])
             if execution_context.execution_log
             else int(time.time())
         )
         completed_at = int(time.time())
         cursor.execute(
-            """
-            INSERT INTO workflow_executions
-            (execution_id, workflow_id, started_at, completed_at, status, execution_log, variables, step_results)
-            VALUES (?, ?, ?, ?, ?, ?, ?, ?)
-        """,
+            "\n INSERT INTO workflow_executions\n (execution_id, workflow_id, started_at, completed_at, status, execution_log, variables, step_results)\n VALUES (?, ?, ?, ?, ?, ?, ?, ?)\n ",
             (
                 execution_id,
                 workflow_id,
@@ -213,37 +143,21 @@ class WorkflowStorage:
                 json.dumps(execution_context.step_results),
             ),
         )
         cursor.execute(
-            """
-            UPDATE workflows
-            SET execution_count = execution_count + 1,
-                last_execution_at = ?
-            WHERE workflow_id = ?
-        """,
+            "\n UPDATE workflows\n SET execution_count = execution_count + 1,\n last_execution_at = ?\n WHERE workflow_id = ?\n ",
             (completed_at, workflow_id),
         )
         conn.commit()
         conn.close()
         return execution_id

     def get_execution_history(self, workflow_id: str, limit: int = 10) -> List[dict]:
         conn = sqlite3.connect(self.db_path, check_same_thread=False)
         cursor = conn.cursor()
         cursor.execute(
-            """
-            SELECT execution_id, started_at, completed_at, status
-            FROM workflow_executions
-            WHERE workflow_id = ?
-            ORDER BY started_at DESC
-            LIMIT ?
-        """,
+            "\n SELECT execution_id, started_at, completed_at, status\n FROM workflow_executions\n WHERE workflow_id = ?\n ORDER BY started_at DESC\n LIMIT ?\n ",
             (workflow_id, limit),
         )
         executions = []
         for row in cursor.fetchall():
             executions.append(
@@ -254,6 +168,5 @@ class WorkflowStorage:
                     "status": row[3],
                 }
             )
         conn.close()
         return executions
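
A storage round trip against these schemas (workflow construction elided; db path illustrative, and the listed dict keys are inferred from the SELECT columns):

from rp.workflows import WorkflowStorage

storage = WorkflowStorage("workflows.db")
# workflow_id = storage.save_workflow(workflow)  # workflow built elsewhere
for wf in storage.list_workflows():
    print(wf["workflow_id"], wf["name"], wf["execution_count"])
# storage.get_execution_history(workflow_id, limit=5)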


@@ -1,101 +1,97 @@
-from pr.core.advanced_context import AdvancedContextManager
-
-def test_adaptive_context_window_simple():
-    mgr = AdvancedContextManager()
-    messages = [
-        {"content": "short"},
-        {"content": "this is a longer message with more words"},
-    ]
-    window = mgr.adaptive_context_window(messages, "simple")
-    assert isinstance(window, int)
-    assert window >= 10
-
-def test_adaptive_context_window_medium():
-    mgr = AdvancedContextManager()
-    messages = [
-        {"content": "short"},
-        {"content": "this is a longer message with more words"},
-    ]
-    window = mgr.adaptive_context_window(messages, "medium")
-    assert isinstance(window, int)
-    assert window >= 20
-
-def test_adaptive_context_window_complex():
-    mgr = AdvancedContextManager()
-    messages = [
-        {"content": "short"},
-        {"content": "this is a longer message with more words"},
-    ]
-    window = mgr.adaptive_context_window(messages, "complex")
-    assert isinstance(window, int)
-    assert window >= 35
-
-def test_analyze_message_complexity():
-    mgr = AdvancedContextManager()
-    messages = [{"content": "hello world"}, {"content": "hello again"}]
-    score = mgr._analyze_message_complexity(messages)
-    assert 0 <= score <= 1
-
-def test_analyze_message_complexity_empty():
-    mgr = AdvancedContextManager()
-    messages = []
-    score = mgr._analyze_message_complexity(messages)
-    assert score == 0
-
-def test_extract_key_sentences():
-    mgr = AdvancedContextManager()
-    text = "This is the first sentence. This is the second sentence. This is a longer third sentence with more words."
-    sentences = mgr.extract_key_sentences(text, 2)
-    assert len(sentences) <= 2
-    assert all(isinstance(s, str) for s in sentences)
-
-def test_extract_key_sentences_empty():
-    mgr = AdvancedContextManager()
-    text = ""
-    sentences = mgr.extract_key_sentences(text, 5)
-    assert sentences == []
-
-def test_advanced_summarize_messages():
-    mgr = AdvancedContextManager()
-    messages = [{"content": "Hello"}, {"content": "How are you?"}]
-    summary = mgr.advanced_summarize_messages(messages)
-    assert isinstance(summary, str)
-
-def test_advanced_summarize_messages_empty():
-    mgr = AdvancedContextManager()
-    messages = []
-    summary = mgr.advanced_summarize_messages(messages)
-    assert summary == "No content to summarize."
-
-def test_score_message_relevance():
-    mgr = AdvancedContextManager()
-    message = {"content": "hello world"}
-    context = "world hello"
-    score = mgr.score_message_relevance(message, context)
-    assert 0 <= score <= 1
-
-def test_score_message_relevance_no_overlap():
-    mgr = AdvancedContextManager()
-    message = {"content": "hello"}
-    context = "world"
-    score = mgr.score_message_relevance(message, context)
-    assert score == 0
-
-def test_score_message_relevance_empty():
-    mgr = AdvancedContextManager()
-    message = {"content": ""}
-    context = ""
-    score = mgr.score_message_relevance(message, context)
-    assert score == 0
+from rp.core.advanced_context import AdvancedContextManager
+
+class TestAdvancedContextManager:
+    def setup_method(self):
+        self.manager = AdvancedContextManager()
+
+    def test_init(self):
+        manager = AdvancedContextManager(knowledge_store="test", conversation_memory="test")
+        assert manager.knowledge_store == "test"
+        assert manager.conversation_memory == "test"
+
+    def test_adaptive_context_window_simple(self):
+        messages = [{"content": "short message"}]
+        result = self.manager.adaptive_context_window(messages, "simple")
+        assert result >= 10
+
+    def test_adaptive_context_window_medium(self):
+        messages = [{"content": "medium length message with some content"}]
+        result = self.manager.adaptive_context_window(messages, "medium")
+        assert result >= 20
+
+    def test_adaptive_context_window_complex(self):
+        messages = [
+            {
+                "content": "very long and complex message with many words and detailed information about various topics"
+            }
+        ]
+        result = self.manager.adaptive_context_window(messages, "complex")
+        assert result >= 35
+
+    def test_adaptive_context_window_very_complex(self):
+        messages = [
+            {
+                "content": "extremely long and very complex message with extensive vocabulary and detailed explanations"
+            }
+        ]
+        result = self.manager.adaptive_context_window(messages, "very_complex")
+        assert result >= 50
+
+    def test_adaptive_context_window_unknown_complexity(self):
+        messages = [{"content": "test"}]
+        result = self.manager.adaptive_context_window(messages, "unknown")
+        assert result >= 20
+
+    def test_analyze_message_complexity(self):
+        messages = [{"content": "This is a test message with some words."}]
+        result = self.manager._analyze_message_complexity(messages)
+        assert 0.0 <= result <= 1.0
+
+    def test_analyze_message_complexity_empty(self):
+        messages = []
+        result = self.manager._analyze_message_complexity(messages)
+        assert result == 0.0
+
+    def test_extract_key_sentences(self):
+        text = "First sentence. Second sentence is longer and more detailed. Third sentence."
+        result = self.manager.extract_key_sentences(text, top_k=2)
+        assert len(result) <= 2
+        assert all(isinstance(s, str) for s in result)
+
+    def test_extract_key_sentences_empty(self):
+        text = ""
+        result = self.manager.extract_key_sentences(text)
+        assert result == []
+
+    def test_advanced_summarize_messages(self):
+        messages = [
+            {"content": "First message with important information."},
+            {"content": "Second message with more details."},
+        ]
+        result = self.manager.advanced_summarize_messages(messages)
+        assert isinstance(result, str)
+        assert len(result) > 0
+
+    def test_advanced_summarize_messages_empty(self):
+        messages = []
+        result = self.manager.advanced_summarize_messages(messages)
+        assert result == "No content to summarize."
+
+    def test_score_message_relevance(self):
+        message = {"content": "test message"}
+        context = "test context"
+        result = self.manager.score_message_relevance(message, context)
+        assert 0.0 <= result <= 1.0
+
+    def test_score_message_relevance_no_overlap(self):
+        message = {"content": "apple banana"}
+        context = "orange grape"
+        result = self.manager.score_message_relevance(message, context)
+        assert result == 0.0
+
+    def test_score_message_relevance_empty(self):
+        message = {"content": ""}
+        context = ""
+        result = self.manager.score_message_relevance(message, context)
+        assert result == 0.0


@@ -1,10 +1,10 @@
-from pr.agents.agent_communication import (
+from rp.agents.agent_communication import (
     AgentCommunicationBus,
     AgentMessage,
     MessageType,
 )
-from pr.agents.agent_manager import AgentInstance, AgentManager
-from pr.agents.agent_roles import AgentRole, get_agent_role, list_agent_roles
+from rp.agents.agent_manager import AgentInstance, AgentManager
+from rp.agents.agent_roles import AgentRole, get_agent_role, list_agent_roles

 def test_get_agent_role():

@@ -82,7 +82,10 @@ def test_agent_manager_get_agent_messages():
 def test_agent_manager_get_session_summary():
     mgr = AgentManager(":memory:", None)
     summary = mgr.get_session_summary()
-    assert isinstance(summary, str)
+    assert isinstance(summary, dict)
+    assert "session_id" in summary
+    assert "active_agents" in summary
+    assert "agents" in summary

 def test_agent_manager_collaborate_agents():
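Note: the updated assertions change the get_session_summary() contract from a pre-formatted string to a structured dict. A hedged sketch of how a caller would consume the new shape — only the three asserted keys are confirmed by the test; the value types below are assumptions:

from rp.agents.agent_manager import AgentManager

mgr = AgentManager(":memory:", None)
summary = mgr.get_session_summary()

# The old API returned a display string; the new API leaves formatting to
# the caller. "session_id", "active_agents", and "agents" are the keys the
# test checks; treating them as str/list/list is an assumption.
print(summary["session_id"])
print(f"{len(summary['active_agents'])} active agents")
for agent in summary["agents"]:
    print(agent)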

View File

@@ -1,62 +1,64 @@
 import unittest
-import urllib.error
-from unittest.mock import MagicMock, patch
-from pr.core.api import call_api, list_models
+import json
+from unittest.mock import patch
+from rp.core.api import call_api, list_models

 class TestApi(unittest.TestCase):
-    @patch("pr.core.api.urllib.request.urlopen")
-    @patch("pr.core.api.auto_slim_messages")
-    def test_call_api_success(self, mock_slim, mock_urlopen):
+    @patch("rp.core.http_client.http_client.post")
+    @patch("rp.core.api.auto_slim_messages")
+    def test_call_api_success(self, mock_slim, mock_post):
         mock_slim.return_value = [{"role": "user", "content": "test"}]
-        mock_response = MagicMock()
-        mock_response.read.return_value = (
-            b'{"choices": [{"message": {"content": "response"}}], "usage": {"tokens": 10}}'
-        )
-        mock_urlopen.return_value.__enter__.return_value = mock_response
+        mock_post.return_value = {
+            "status": 200,
+            "text": '{"choices": [{"message": {"content": "response"}}], "usage": {"tokens": 10}}',
+            "json": lambda: json.loads(
+                '{"choices": [{"message": {"content": "response"}}], "usage": {"tokens": 10}}'
+            ),
+        }
         result = call_api([], "model", "http://url", "key", True, [{"name": "tool"}])
         self.assertIn("choices", result)
-        mock_urlopen.assert_called_once()
+        mock_post.assert_called_once()

-    @patch("urllib.request.urlopen")
-    @patch("pr.core.api.auto_slim_messages")
-    def test_call_api_http_error(self, mock_slim, mock_urlopen):
+    @patch("rp.core.http_client.http_client.post")
+    @patch("rp.core.api.auto_slim_messages")
+    def test_call_api_http_error(self, mock_slim, mock_post):
         mock_slim.return_value = [{"role": "user", "content": "test"}]
-        mock_urlopen.side_effect = urllib.error.HTTPError(
-            "http://url", 500, "error", None, MagicMock()
-        )
+        mock_post.return_value = {"error": True, "status": 500, "text": "error"}
         result = call_api([], "model", "http://url", "key", False, [])
         self.assertIn("error", result)

-    @patch("urllib.request.urlopen")
-    @patch("pr.core.api.auto_slim_messages")
-    def test_call_api_general_error(self, mock_slim, mock_urlopen):
+    @patch("rp.core.http_client.http_client.post")
+    @patch("rp.core.api.auto_slim_messages")
+    def test_call_api_general_error(self, mock_slim, mock_post):
         mock_slim.return_value = [{"role": "user", "content": "test"}]
-        mock_urlopen.side_effect = Exception("test error")
+        mock_post.return_value = {"error": True, "exception": "test error"}
         result = call_api([], "model", "http://url", "key", False, [])
         self.assertIn("error", result)

-    @patch("urllib.request.urlopen")
-    def test_list_models_success(self, mock_urlopen):
-        mock_response = MagicMock()
-        mock_response.read.return_value = b'{"data": [{"id": "model1"}]}'
-        mock_urlopen.return_value.__enter__.return_value = mock_response
+    @patch("rp.core.http_client.http_client.get")
+    def test_list_models_success(self, mock_get):
+        mock_get.return_value = {
+            "status": 200,
+            "text": '{"data": [{"id": "model1"}]}',
+            "json": lambda: json.loads('{"data": [{"id": "model1"}]}'),
+        }
         result = list_models("http://url", "key")
         self.assertEqual(result, [{"id": "model1"}])

-    @patch("urllib.request.urlopen")
-    def test_list_models_error(self, mock_urlopen):
-        mock_urlopen.side_effect = Exception("error")
+    @patch("rp.core.http_client.http_client.get")
+    def test_list_models_error(self, mock_get):
+        mock_get.return_value = {"error": True, "exception": "error"}
         result = list_models("http://url", "key")
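Note: the mocks above encode the contract of the new synchronous client: http_client.post and http_client.get return a plain dict, with "error": True (plus "status"/"text" or "exception") on failure, and with "status", "text", and a zero-argument "json" callable on success. A sketch of a caller written against that contract — fetch_models, the endpoint URL handling, and the headers keyword are illustrative assumptions, not taken from the diff:

from rp.core.http_client import http_client

def fetch_models(api_url: str, api_key: str):
    # Hypothetical helper mirroring what list_models is mocked to do.
    # The headers keyword is an assumption about http_client.get's signature.
    resp = http_client.get(api_url, headers={"Authorization": f"Bearer {api_key}"})
    if resp.get("error"):
        # Transport failures carry "exception"; HTTP errors carry "status"/"text".
        return {"error": resp.get("exception") or resp.get("text")}
    # Success responses expose parsed JSON through a zero-arg "json" callable.
    return resp["json"]()["data"]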

Some files were not shown because too many files have changed in this diff.