# Container Management API Client Manual

## Table of Contents

1. [Installation](#installation)
2. [Quick Start](#quick-start)
3. [Client Initialization](#client-initialization)
4. [Container Operations](#container-operations)
5. [Container Lifecycle](#container-lifecycle)
6. [Port Management](#port-management)
7. [File Operations](#file-operations)
8. [Interactive Terminal](#interactive-terminal)
9. [Command Execution](#command-execution)
10. [Batch Operations](#batch-operations)
11. [Utility Methods](#utility-methods)
12. [Complete Examples](#complete-examples)

## Installation

```bash
pip install aiohttp
```

Save the client as `container_client.py` and import it:

```python
from container_client import ContainerClient, ContainerConfig, ContainerInfo, ContainerStatus
```

## Quick Start

```python
import asyncio
from container_client import ContainerClient

async def main():
    async with ContainerClient("http://localhost:8080", "admin", "password") as client:
        # Create a container
        container = await client.create_container(
            image="python:3.12-slim",
            env={"APP_ENV": "production"},
            tags=["web", "api"]
        )
        print(f"Created container: {container.cuid}")

        # Start the container
        await client.start_container(container.cuid)

        # Execute a command
        stdout, stderr, exit_code = await client.execute_command(
            container.cuid,
            "python --version"
        )
        print(f"Python version: {stdout}")

        # Clean up
        await client.stop_container(container.cuid)
        await client.delete_container(container.cuid)

asyncio.run(main())
```

## Client Initialization

### Using Context Manager (Recommended)

```python
async with ContainerClient("http://localhost:8080", "admin", "password") as client:
    # Client is automatically connected and will be closed after use
    health = await client.health_check()
    print(f"API Status: {health.status}")
```

### Manual Connection Management

```python
client = ContainerClient("http://localhost:8080", "admin", "password")
await client.connect()

try:
    # Use client
    health = await client.health_check()
finally:
    await client.close()
```
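Because the client wraps an aiohttp session, a single connected instance can usually be shared by several coroutines; aiohttp sessions are generally safe to reuse across tasks, but confirm this against your client implementation before relying on it under load. The sketch below issues two independent read-only calls in parallel with `asyncio.gather`, using only methods documented in this manual.

```python
import asyncio
from container_client import ContainerClient

async def parallel_checks():
    async with ContainerClient("http://localhost:8080", "admin", "password") as client:
        # Fire both requests concurrently over the same client session
        health, running = await asyncio.gather(
            client.health_check(),
            client.list_all_containers(status=["running"]),
        )
        print(f"API status: {health.status}, running containers: {len(running)}")

asyncio.run(parallel_checks())
```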
## Container Operations

### Create Container

```python
# Basic creation
container = await client.create_container(
    image="nginx:alpine"
)

# Full configuration
container = await client.create_container(
    image="python:3.12-slim",
    env={
        "APP_ENV": "production",
        "DEBUG": "false",
        "DATABASE_URL": "postgresql://..."
    },
    tags=["web", "production", "v2.0"],
    resources={
        "cpus": 2.0,        # CPU cores
        "memory": "4096m",  # Memory limit
        "pids": 2048        # PID limit
    },
    ports=[
        {"host": 8080, "container": 80, "protocol": "tcp"},
        {"host": 8443, "container": 443, "protocol": "tcp"}
    ]
)

print(f"Container ID: {container.cuid}")
print(f"Status: {container.status}")
print(f"Image: {container.image}")
```

### List Containers

```python
# List all containers (auto-pagination)
all_containers = await client.list_all_containers()

# List with filtering
running_containers = await client.list_all_containers(
    status=["running", "paused"]
)

# Manual pagination
containers, next_cursor = await client.list_containers(limit=10)
print(f"Found {len(containers)} containers")

if next_cursor:
    # Get next page
    more_containers, next_cursor = await client.list_containers(
        cursor=next_cursor,
        limit=10
    )
```

### Get Container Details

```python
container = await client.get_container("c123e4567-e89b-12d3-a456-426614174000")

print(f"Container: {container.cuid}")
print(f"Image: {container.image}")
print(f"Status: {container.status}")
print(f"Environment: {container.env}")
print(f"Tags: {container.tags}")
print(f"Resources: {container.resources}")
print(f"Ports: {container.ports}")
```

### Update Container

```python
# Update environment variables
updated = await client.update_container(
    container.cuid,
    env={
        "NEW_VAR": "value",     # Add new variable
        "EXISTING_VAR": "new",  # Update existing
        "OLD_VAR": None         # Remove variable
    }
)

# Replace tags
updated = await client.update_container(
    container.cuid,
    tags=["updated", "v2.1"]
)

# Update resources (merged with existing)
updated = await client.update_container(
    container.cuid,
    resources={
        "memory": "8192m"  # Only update memory, keep other limits
    }
)

# Change image
updated = await client.update_container(
    container.cuid,
    image="python:3.13-slim"
)
```

### Delete Container

```python
# Delete container and its mount directory
await client.delete_container(container.cuid)

# Batch delete
cuids = ["cuid1", "cuid2", "cuid3"]
errors = await client.batch_delete(cuids)

for cuid, error in zip(cuids, errors):
    if error:
        print(f"Failed to delete {cuid}: {error}")
    else:
        print(f"Deleted {cuid}")
```
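A routine chore is clearing out stopped containers that belong to a particular workload. The helper below is a sketch, not part of the client: it assumes the `status` filter accepts `"exited"` (the status name used later under Wait for Status) and that listed containers expose `tags` as a list, as shown in Get Container Details.

```python
async def remove_stopped_by_tag(client, tag: str):
    """Delete every exited container carrying `tag` (illustrative helper)."""
    stopped = await client.list_all_containers(status=["exited"])  # assumed filter value
    targets = [c.cuid for c in stopped if tag in (c.tags or [])]
    if not targets:
        return []
    # batch_delete returns one error slot per cuid, empty on success (see above)
    errors = await client.batch_delete(targets)
    for cuid, error in zip(targets, errors):
        print(f"{cuid}: {'failed: ' + str(error) if error else 'deleted'}")
    return targets
```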
## Container Lifecycle

### Start Container

```python
result = await client.start_container(container.cuid)
print(result)  # {"status": "started"}

# Wait for container to be running
if await client.wait_for_status(container.cuid, "running", timeout=30):
    print("Container is running")
else:
    print("Container failed to start")
```

### Stop Container

```python
result = await client.stop_container(container.cuid)
print(result)  # {"status": "stopped"}
```

### Pause/Unpause Container

```python
# Pause
await client.pause_container(container.cuid)

# Do something while paused...

# Unpause
await client.unpause_container(container.cuid)
```

### Restart Container

```python
result = await client.restart_container(container.cuid)
print(result)  # {"status": "restarted"}
```

## Port Management

### Update Ports

```python
# Add or update port mappings
result = await client.update_ports(
    container.cuid,
    ports=[
        {"host": 3000, "container": 3000, "protocol": "tcp"},
        {"host": 9000, "container": 9000, "protocol": "udp"},
        {"host": 8080, "container": 80, "protocol": "tcp"}
    ]
)

# Remove all ports
result = await client.update_ports(container.cuid, ports=[])
```
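Host port collisions are a common reason a mapping fails. If the API server runs on the same machine as your script, a quick local check before calling `update_ports` catches the obvious conflicts. `is_port_free` below is a hypothetical helper and only a best-effort heuristic; it says nothing about ports bound on a remote server.

```python
import socket

def is_port_free(port: int, host: str = "127.0.0.1") -> bool:
    """Best-effort check that nothing is already listening on a local TCP port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(0.5)
        return sock.connect_ex((host, port)) != 0  # non-zero: refused or timed out

desired = [{"host": 3000, "container": 3000, "protocol": "tcp"}]
if all(is_port_free(p["host"]) for p in desired):
    await client.update_ports(container.cuid, ports=desired)
else:
    print("Requested host port is already in use")
```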
## File Operations

### Upload Files

```python
# Method 1: Upload from dictionary
files = {
    "boot.py": b"""
import os
import time

print(f"Container {os.environ.get('CONTAINER_UID')} started")
while True:
    print("Working...")
    time.sleep(10)
""",
    "requirements.txt": b"requests==2.31.0\nnumpy==1.24.0",
    "data/config.json": b'{"setting": "value"}'
}

await client.upload_zip(container.cuid, files=files)

# Method 2: Upload existing ZIP file
await client.upload_zip(
    container.cuid,
    zip_path="/path/to/archive.zip"
)

# Method 3: Upload raw ZIP data
with open("archive.zip", "rb") as f:
    zip_data = f.read()
await client.upload_zip(container.cuid, zip_data=zip_data)
```

### Download Files

```python
# Download single file
content = await client.download_file(
    container.cuid,
    path="data/output.json"
)
print(content.decode('utf-8'))

# Download and save file
await client.download_file(
    container.cuid,
    path="logs/app.log",
    save_to="./downloads/app.log"
)

# Download directory as ZIP
zip_data = await client.download_zip(
    container.cuid,
    path="data"  # Download data/ directory
)

# Download entire container mount as ZIP
zip_data = await client.download_zip(
    container.cuid,
    path=""  # Empty path for root
)

# Download and extract
await client.download_zip(
    container.cuid,
    path="",
    extract_to="./container_backup/"
)
```
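When `extract_to` is omitted, the examples above treat the return value of `download_zip` as the raw archive bytes. Assuming that return type, the standard-library `zipfile` module can inspect a download entirely in memory before you decide whether to extract it:

```python
import io
import zipfile

# List the contents of a downloaded archive without writing it to disk
zip_data = await client.download_zip(container.cuid, path="data")
with zipfile.ZipFile(io.BytesIO(zip_data)) as archive:
    for info in archive.infolist():
        print(f"{info.filename}  ({info.file_size} bytes)")
```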
## Interactive Terminal

### Enter Container CLI (Full Terminal Experience)

```python
# Enter interactive terminal (blocks until Ctrl+] is pressed)
await client.enter_container(container.cuid)

# You can now interact with the container terminal much like an SSH session:
# - Type commands normally
# - Ctrl+C sends an interrupt to the container
# - Ctrl+] exits the terminal session
```

### Custom Terminal Session

```python
# Create terminal with custom size
terminal = await client.create_terminal(
    container.cuid,
    cols=120,  # Terminal width
    rows=40    # Terminal height
)

# Start interactive mode
await terminal.start_interactive()

try:
    # Wait for session to complete
    await terminal.wait()
finally:
    # Clean up
    await terminal.stop()
```

### Programmatic Terminal Control

```python
terminal = await client.create_terminal(container.cuid)

# Send commands programmatically
await terminal.send_input("ls -la\n")
await terminal.send_input("pwd\n")

# Send special signals
await terminal.send_interrupt()  # Ctrl+C
await terminal.send_terminate()  # SIGTERM
await terminal.send_kill()       # SIGKILL

# Resize terminal
await terminal.resize(cols=100, rows=30)

# Send binary data
await terminal.send_input_bytes(b"\x03")  # Ctrl+C as bytes

# Clean up when done
await terminal.stop()
```

## Command Execution

### Execute and Get Output

```python
# Simple command execution
stdout, stderr, exit_code = await client.execute_command(
    container.cuid,
    "ls -la /app"
)
print(f"Output: {stdout}")
print(f"Errors: {stderr}")
print(f"Exit Code: {exit_code}")

# With custom timeout
stdout, stderr, exit_code = await client.execute_command(
    container.cuid,
    "python long_running_script.py",
    timeout=300.0  # 5 minutes
)

# Multi-line script
script = """
cd /app
python -m pip install -r requirements.txt
python main.py --init
"""
stdout, stderr, exit_code = await client.execute_command(
    container.cuid,
    script
)
```

### Stream Output in Real-Time

```python
# Define callbacks for output
def on_stdout(data):
    print(f"[OUT] {data}", end="")

def on_stderr(data):
    print(f"[ERR] {data}", end="")

# Stream command output
exit_code = await client.stream_output(
    container.cuid,
    "python train_model.py",
    on_stdout=on_stdout,
    on_stderr=on_stderr
)

print(f"\nCommand finished with exit code: {exit_code}")
```

### Upload and Run

```python
# Upload files and execute a command in one operation
files = {
    "script.py": b"""
import sys
print("Hello from uploaded script!")
print(f"Arguments: {sys.argv[1:]}")
""",
    "data.txt": b"sample data"
}

exit_code = await client.upload_and_run(
    container.cuid,
    files=files,
    command="python script.py arg1 arg2",
    wait_for_completion=True
)
print(f"Script exit code: {exit_code}")

# Start a long-running process without waiting
await client.upload_and_run(
    container.cuid,
    files={"server.py": server_code},
    command="python server.py",
    wait_for_completion=False
)
```

## Batch Operations

### Create Multiple Containers

```python
from container_client import ContainerConfig

# Define container configurations
configs = [
    ContainerConfig(
        image="nginx:alpine",
        env={"NGINX_PORT": "80"},
        tags=["web", "frontend"],
        ports=[{"host": 8080, "container": 80, "protocol": "tcp"}]
    ),
    ContainerConfig(
        image="redis:alpine",
        env={"REDIS_PASSWORD": "secret"},
        tags=["cache", "backend"]
    ),
    ContainerConfig(
        image="postgres:15",
        env={"POSTGRES_PASSWORD": "dbpass"},
        tags=["database", "backend"],
        resources={"memory": "4096m", "cpus": 2.0}
    )
]

# Create all containers in parallel
containers = await client.batch_create(configs)

for container in containers:
    print(f"Created: {container.cuid} ({container.image})")
```

### Execute Commands on Multiple Containers

```python
# Define commands for each container
commands = {
    "c123e4567-e89b-12d3-a456-426614174000": "nginx -v",
    "c223e4567-e89b-12d3-a456-426614174001": "redis-cli ping",
    "c323e4567-e89b-12d3-a456-426614174002": "psql --version"
}

# Execute all commands in parallel
results = await client.batch_execute(commands, timeout=10.0)

for cuid, (stdout, stderr, exit_code) in results.items():
    print(f"\nContainer {cuid}:")
    print(f"  Output: {stdout.strip()}")
    if stderr:
        print(f"  Errors: {stderr.strip()}")
    print(f"  Exit Code: {exit_code}")
```

## Utility Methods

### Get Container Status

```python
status = await client.get_status(container.cuid)

print(f"Container: {status.cuid}")
print(f"Status: {status.status}")
print(f"Created: {status.created_at}")
print(f"Uptime: {status.uptime}")
print(f"Restarts: {status.restarts}")

if status.last_error:
    print(f"Last Error: {status.last_error}")
```

### Health Check

```python
health = await client.health_check()

print(f"API Status: {health.status}")
print(f"Compose Version: {health.compose_version}")
print(f"Uptime: {health.uptime_s} seconds")
```

### Wait for Status

```python
# Wait for container to be running
if await client.wait_for_status(
    container.cuid,
    target_status="running",
    timeout=60.0,
    poll_interval=2.0
):
    print("Container is running")
else:
    print("Timeout waiting for container")

# Wait for container to stop
if await client.wait_for_status(container.cuid, "exited"):
    print("Container has stopped")
```
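`get_status`, `start_container`, and `wait_for_status` combine naturally into an idempotent "make sure it is running" step before doing real work. The helper below is an illustrative sketch, not part of the client; it uses only methods documented above and the status names used throughout this manual.

```python
async def ensure_running(client, cuid: str, timeout: float = 30.0) -> bool:
    """Start a container only if needed, then confirm it reached the running state."""
    status = await client.get_status(cuid)
    if status.status == "running":
        return True
    await client.start_container(cuid)
    # wait_for_status polls until the target state is reached or the timeout expires
    return await client.wait_for_status(cuid, "running", timeout=timeout)
```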
## Complete Examples

### Example 1: Web Application Deployment

```python
import asyncio
from container_client import ContainerClient

async def deploy_web_app():
    async with ContainerClient("http://localhost:8080", "admin", "password") as client:
        # Create container
        container = await client.create_container(
            image="python:3.12-slim",
            env={
                "FLASK_APP": "app.py",
                "FLASK_ENV": "production"
            },
            ports=[{"host": 5000, "container": 5000, "protocol": "tcp"}],
            resources={"memory": "1024m", "cpus": 1.0}
        )
        print(f"Created container: {container.cuid}")

        # Upload application files
        app_files = {
            "app.py": b"""
from flask import Flask
app = Flask(__name__)

@app.route('/')
def hello():
    return 'Hello from containerized Flask!'

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)
""",
            "requirements.txt": b"flask==3.0.0",
            "boot.py": b"""
import subprocess
import sys

# Install requirements
subprocess.check_call([sys.executable, "-m", "pip", "install", "-r", "requirements.txt"])

# Start Flask app
subprocess.call([sys.executable, "app.py"])
"""
        }
        await client.upload_zip(container.cuid, files=app_files)

        # Start container
        await client.start_container(container.cuid)

        # Wait for app to be ready
        if await client.wait_for_status(container.cuid, "running", timeout=30):
            print("Web app is running on http://localhost:5000")

            # Check that the app responds
            stdout, stderr, _ = await client.execute_command(
                container.cuid,
                "curl -s http://localhost:5000"
            )
            print(f"App response: {stdout}")

        return container.cuid

asyncio.run(deploy_web_app())
```

### Example 2: Data Processing Pipeline

```python
import asyncio
import json
from container_client import ContainerClient

async def run_data_pipeline():
    async with ContainerClient("http://localhost:8080", "admin", "password") as client:
        # Create data processor container
        processor = await client.create_container(
            image="python:3.12-slim",
            env={"PYTHONUNBUFFERED": "1"},
            resources={"memory": "4096m", "cpus": 2.0}
        )

        # Upload data and processing script
        files = {
            "boot.py": b"""
import json

# Process data
with open('input.json', 'r') as f:
    data = json.load(f)

# Transform data
processed = {
    'total': len(data),
    'items': [{'id': i, 'value': item * 2} for i, item in enumerate(data)]
}

# Save results
with open('output.json', 'w') as f:
    json.dump(processed, f, indent=2)

print(f"Processed {len(data)} items")
""",
            "input.json": b"[1, 2, 3, 4, 5, 6, 7, 8, 9, 10]"
        }
        await client.upload_zip(processor.cuid, files=files)

        # Start container and run processing
        await client.start_container(processor.cuid)

        # Execute processing with streaming output
        def print_output(data):
            print(data, end="")

        exit_code = await client.stream_output(
            processor.cuid,
            "python /app/boot.py",
            on_stdout=print_output,
            on_stderr=print_output
        )

        if exit_code == 0:
            # Download results
            output = await client.download_file(
                processor.cuid,
                "output.json"
            )
            results = json.loads(output)
            print(f"\nResults: {results}")

        # Clean up
        await client.stop_container(processor.cuid)
        await client.delete_container(processor.cuid)

asyncio.run(run_data_pipeline())
```

### Example 3: Interactive Development Environment

```python
import asyncio
from container_client import ContainerClient

async def create_dev_environment():
    async with ContainerClient("http://localhost:8080", "admin", "password") as client:
        # Create development container
        dev_container = await client.create_container(
            image="python:3.12",
            env={
                "PYTHONPATH": "/app",
                "DEVELOPMENT": "true"
            },
            tags=["development", "interactive"],
            resources={"memory": "8192m", "cpus": 4.0},
            ports=[
                {"host": 8888, "container": 8888, "protocol": "tcp"},  # Jupyter
                {"host": 5000, "container": 5000, "protocol": "tcp"}   # Dev server
            ]
        )

        # Set up development tools
        setup_script = """
pip install jupyter ipython black flake8 pytest
pip install pandas numpy matplotlib
mkdir -p /app/notebooks /app/src /app/tests
echo 'Development environment ready!' > /app/README.md
"""
        await client.upload_zip(
            dev_container.cuid,
            files={"setup.sh": setup_script.encode()}
        )

        # Start container
        await client.start_container(dev_container.cuid)

        # Run setup
        print("Setting up development environment...")
        stdout, stderr, exit_code = await client.execute_command(
            dev_container.cuid,
            "bash /app/setup.sh",
            timeout=300
        )

        if exit_code == 0:
            print("Development environment ready!")
            print(f"Container ID: {dev_container.cuid}")

            # Start Jupyter in the background
            await client.upload_and_run(
                dev_container.cuid,
                files={"start_jupyter.sh": b"jupyter notebook --ip=0.0.0.0 --port=8888 --no-browser --allow-root"},
                command="bash /app/start_jupyter.sh &",
                wait_for_completion=False
            )

            print("\nYou can now:")
            print("1. Enter the container terminal:")
            print(f"   await client.enter_container('{dev_container.cuid}')")
            print("2. Access Jupyter at http://localhost:8888")

            # Enter interactive terminal
            await client.enter_container(dev_container.cuid)

        return dev_container.cuid

asyncio.run(create_dev_environment())
```

### Example 4: Multi-Container Application

```python
import asyncio
from container_client import ContainerClient, ContainerConfig

async def deploy_stack():
    async with ContainerClient("http://localhost:8080", "admin", "password") as client:
        # Define application stack
        stack = {
            "frontend": ContainerConfig(
                image="nginx:alpine",
                tags=["frontend", "web"],
                ports=[{"host": 80, "container": 80, "protocol": "tcp"}]
            ),
            "api": ContainerConfig(
                image="node:18-alpine",
                env={"NODE_ENV": "production", "PORT": "3000"},
                tags=["api", "backend"],
                ports=[{"host": 3000, "container": 3000, "protocol": "tcp"}]
            ),
            "database": ContainerConfig(
                image="postgres:15-alpine",
                env={
                    "POSTGRES_DB": "appdb",
                    "POSTGRES_USER": "appuser",
                    "POSTGRES_PASSWORD": "secret"
                },
                tags=["database", "backend"],
                resources={"memory": "2048m", "cpus": 1.0}
            ),
            "cache": ContainerConfig(
                image="redis:7-alpine",
                tags=["cache", "backend"],
                resources={"memory": "512m"}
            )
        }

        # Deploy all containers
        print("Deploying application stack...")
        containers = await client.batch_create(list(stack.values()))

        # Map container IDs
        container_map = {}
        for container, (name, _) in zip(containers, stack.items()):
            container_map[name] = container.cuid
            print(f"  {name}: {container.cuid}")

        # Start all containers
        for name, cuid in container_map.items():
            await client.start_container(cuid)
            print(f"Started {name}")

        # Wait for all to be running
        all_running = True
        for name, cuid in container_map.items():
            if not await client.wait_for_status(cuid, "running", timeout=30):
                print(f"Failed to start {name}")
                all_running = False

        if all_running:
            print("\nStack deployed successfully!")

            # Check connectivity
            checks = {
                container_map["api"]: "node --version",
                container_map["database"]: "psql --version",
                container_map["cache"]: "redis-cli ping"
            }
            results = await client.batch_execute(checks, timeout=5)

            print("\nHealth checks:")
            for cuid, (stdout, stderr, exit_code) in results.items():
                name = [k for k, v in container_map.items() if v == cuid][0]
                status = "✓" if exit_code == 0 else "✗"
                print(f"  {name}: {status}")

        return container_map

asyncio.run(deploy_stack())
```

## Error Handling

```python
import asyncio
from container_client import ContainerClient

async def safe_operations():
    async with ContainerClient("http://localhost:8080", "admin", "password") as client:
        try:
            # Attempt to get a non-existent container
            container = await client.get_container("invalid-cuid")
        except Exception as e:
            print(f"Expected error: {e}")

        try:
            # Create container with an invalid image
            container = await client.create_container(
                image="non/existent:image"
            )
        except Exception as e:
            print(f"Image error: {e}")

        # Safe command execution with timeout
        try:
            stdout, stderr, exit_code = await client.execute_command(
                "some-cuid",
                "sleep 100",
                timeout=5.0
            )
        except TimeoutError as e:
            print(f"Command timed out: {e}")
        except Exception as e:
            print(f"Execution error: {e}")

asyncio.run(safe_operations())
```
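Transient failures (a restarting API server, a brief network drop) surface as exceptions, as shown above. A small retry wrapper with exponential backoff can absorb them; this is a generic pattern rather than a feature of the client, and the helper name is hypothetical.

```python
import asyncio

async def with_retries(make_call, attempts: int = 3, base_delay: float = 1.0):
    """Await the coroutine returned by make_call, retrying with exponential backoff."""
    for attempt in range(1, attempts + 1):
        try:
            return await make_call()
        except Exception as exc:
            if attempt == attempts:
                raise  # out of attempts, surface the last error
            delay = base_delay * 2 ** (attempt - 1)
            print(f"Attempt {attempt} failed ({exc}); retrying in {delay:.0f}s")
            await asyncio.sleep(delay)

# Pass a zero-argument callable so a fresh request is issued on every attempt
container = await with_retries(
    lambda: client.get_container("c123e4567-e89b-12d3-a456-426614174000")
)
```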
## Tips and Best Practices

1. **Always use context managers** for automatic session cleanup
2. **Handle timeouts** for long-running operations
3. **Use batch operations** for better performance with multiple containers
4. **Stream output** for long-running commands instead of waiting for completion
5. **Set resource limits** to prevent containers from consuming too many resources
6. **Use tags** to organize and filter containers
7. **Check health status** before performing operations
8. **Clean up containers** after use to free resources
9. **Use wait_for_status** to ensure containers are ready before operations
10. **Handle errors gracefully** with try-except blocks

## Requirements

- Python 3.12+
- aiohttp
- Container Management API server running
- Valid API credentials in `.env` file

## Support

For issues or questions:

1. Check API server logs: `logs/actions.jsonl`
2. Verify container status with `get_status()`
3. Use `health_check()` to verify API connectivity
4. Enable debug output by monitoring WebSocket frames (see the logging sketch below)
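For item 4 above, the quickest way to surface wire-level activity is the standard `logging` module: aiohttp logs through loggers such as `aiohttp.client` and `aiohttp.websocket`. Whether `container_client` logs under its own module name is an assumption, so adjust the last logger name to your setup. A minimal sketch:

```python
import logging

# Global baseline, then verbose logging for the HTTP/WebSocket layer
logging.basicConfig(level=logging.INFO)
logging.getLogger("aiohttp.client").setLevel(logging.DEBUG)
logging.getLogger("aiohttp.websocket").setLevel(logging.DEBUG)
logging.getLogger("container_client").setLevel(logging.DEBUG)  # assumed logger name
```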