Implementation Details
Deep dive into the framework's architecture, components, and implementation details.
Architecture Overview
Conductor Framework follows a modular architecture with a clear separation of concerns. Data flows through the components as follows:
Component Interaction
- Framework loads embedded manifests and applies template rendering
- Manifests are stored and indexed for quick access
- Server exposes REST API and Web UI
- Reconciler applies manifests to Kubernetes cluster
- Events are tracked and stored for monitoring
- CRD client provides parameter management
Core Components
Framework Core
Location: pkg/framework/framework.go
The main entry point, framework.Run(), orchestrates the entire framework lifecycle.
Initialization Sequence
- Logger Setup: Initialize zap logger with development configuration
- Config Validation: Validate all required configuration fields
- Kubernetes Client Setup: Create clientset and dynamic client (with fallback if unavailable)
- Manifest Loading: Load and template manifests with parameter injection
- Server Creation: Initialize HTTP server with pre-loaded manifests
- Lifecycle Management: Start server and handle graceful shutdown
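The initialization order above can be sketched as follows. Apart from framework.Run itself, the function and field names here are illustrative assumptions, not the framework's actual API:

```go
package main

import (
	"errors"
	"fmt"
	"log"
)

// Config mirrors the fields the run sequence needs; names are illustrative.
type Config struct {
	AppName string
	Port    int
}

// run sketches the documented order: validate config first, then set up
// clients, load manifests, and finally start the server.
func run(cfg Config) error {
	// 1. Config validation: fail fast before touching any external system.
	if cfg.AppName == "" {
		return errors.New("config: AppName is required")
	}
	if cfg.Port == 0 {
		cfg.Port = 8080 // fall back to a sensible default
	}
	// 2. Kubernetes client setup would go here; on failure the framework
	//    falls back to defaults rather than aborting.
	// 3. Manifest loading and template rendering would go here.
	// 4. Server creation and graceful-shutdown handling would go here.
	fmt.Printf("starting %s on port %d\n", cfg.AppName, cfg.Port)
	return nil
}

func main() {
	if err := run(Config{AppName: "demo"}); err != nil {
		log.Fatal(err)
	}
}
```

Validating configuration before creating clients keeps failures cheap and the error messages unambiguous.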
Configuration Management
The Config struct holds all framework configuration:
- Application metadata (name, version)
- Manifest configuration (FS, root path, template functions)
- Storage configuration (data path)
- Server configuration (port)
- Logging configuration (retention, cleanup interval)
- CRD configuration (group, version, resource)
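A struct covering the groups above might look like the following sketch; the field names are assumptions for illustration, not the real definition in pkg/framework:

```go
package main

import (
	"embed"
	"fmt"
	"text/template"
	"time"
)

// Config sketches the configuration groups listed above. Field names are
// illustrative; consult pkg/framework for the actual struct.
type Config struct {
	// Application metadata
	Name    string
	Version string
	// Manifest configuration
	ManifestFS    embed.FS
	ManifestRoot  string
	TemplateFuncs template.FuncMap
	// Storage configuration
	DataPath string
	// Server configuration
	Port int
	// Logging configuration
	EventRetention  time.Duration
	CleanupInterval time.Duration
	// CRD configuration
	CRDGroup    string
	CRDVersion  string
	CRDResource string
}

func main() {
	cfg := Config{Name: "demo", Version: "0.1.0", Port: 8080, DataPath: "/var/lib/conductor"}
	fmt.Println(cfg.Name, cfg.Port)
}
```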
Server
Location: pkg/framework/server/
Manages the HTTP server lifecycle and routes.
Key Responsibilities
- HTTP server setup using standard library
- Route registration for REST API and Web UI
- Middleware stack (CORS, logging)
- Lifecycle management (Start, WaitForShutdown, Close)
- Graceful shutdown handling
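A minimal sketch of these responsibilities using only the standard library; the route, middleware comment, and shutdown flow are illustrative, not the server package's exact code:

```go
package main

import (
	"context"
	"fmt"
	"net/http"
	"time"
)

// newServer wires routes onto a standard-library HTTP server.
func newServer(port int) *http.Server {
	mux := http.NewServeMux()
	mux.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "ok")
	})
	// Middleware such as CORS and request logging would wrap mux here.
	return &http.Server{Addr: fmt.Sprintf(":%d", port), Handler: mux}
}

func main() {
	srv := newServer(0) // port 0 picks a free port; real config supplies one
	// Start serving in a goroutine so main can wait for a shutdown signal.
	go func() {
		if err := srv.ListenAndServe(); err != nil && err != http.ErrServerClosed {
			fmt.Println("server error:", err)
		}
	}()
	// The real framework waits for SIGINT/SIGTERM; here we shut down
	// immediately to demonstrate the graceful path.
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	if err := srv.Shutdown(ctx); err != nil {
		fmt.Println("shutdown error:", err)
	}
	fmt.Println("server stopped")
}
```

http.Server.Shutdown stops accepting new connections and waits for in-flight requests, which is what makes the shutdown graceful rather than abrupt.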
Reconciler
Location: pkg/framework/reconciler/
Handles Kubernetes resource reconciliation with concurrency control.
Key Features
- Client Setup: Kubernetes clientset and dynamic client
- Resource Application: Apply manifests to cluster using server-side apply
- Concurrency: Max 10 concurrent reconciliations using semaphores
- Error Recovery: Retry logic and error tracking
- Orphaned Resource Cleanup: Remove resources no longer in manifests
- Caching: GVK and resource name caching for performance
Manifest System
Location: pkg/framework/manifest/
Loads and processes Kubernetes manifests with template rendering.
Features
- Embedded Filesystem Loading: Uses Go's embed.FS
- Template Rendering: Go template engine with Sprig functions
- Sprig Integration: 60+ functions (excluding env/expandenv for security)
- Parameter Injection: From CRD spec or defaults
- File System Helper: .Files.Get() for template file access
- Custom Functions: Support for user-defined template functions
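The rendering step looks roughly like this sketch. It uses only the standard-library template engine; the real loader additionally registers Sprig and user-defined functions, and the function name renderManifest is an assumption:

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// renderManifest runs a raw manifest through Go's template engine with the
// given parameters injected as template data.
func renderManifest(raw string, params map[string]any) (string, error) {
	// missingkey=error surfaces typos in parameter names instead of
	// silently emitting "<no value>".
	tmpl, err := template.New("manifest").Option("missingkey=error").Parse(raw)
	if err != nil {
		return "", fmt.Errorf("parsing manifest: %w", err)
	}
	var buf bytes.Buffer
	if err := tmpl.Execute(&buf, params); err != nil {
		return "", fmt.Errorf("rendering manifest: %w", err)
	}
	return buf.String(), nil
}

func main() {
	raw := "replicas: {{ .replicas }}\nimage: {{ .image }}"
	out, err := renderManifest(raw, map[string]any{"replicas": 3, "image": "nginx:1.27"})
	if err != nil {
		panic(err)
	}
	fmt.Println(out)
}
```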
Store
Location: pkg/framework/store/
Manages manifest storage and indexing.
Capabilities
- In-memory manifest storage (map-based)
- Indexing system for quick lookups
- Query capabilities by resource key
- Persistence layer (BadgerDB for overrides)
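A map-based store with safe concurrent access can be sketched as follows; the type and method names are illustrative, and the BadgerDB persistence layer is out of scope here:

```go
package main

import (
	"fmt"
	"sync"
)

// Store sketches the in-memory manifest store: a map keyed by resource key,
// guarded by an RWMutex so many readers can query concurrently.
type Store struct {
	mu        sync.RWMutex
	manifests map[string]string // resource key -> rendered manifest
}

func NewStore() *Store {
	return &Store{manifests: make(map[string]string)}
}

func (s *Store) Put(key, manifest string) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.manifests[key] = manifest
}

func (s *Store) Get(key string) (string, bool) {
	s.mu.RLock() // read lock: lookups don't block each other
	defer s.mu.RUnlock()
	m, ok := s.manifests[key]
	return m, ok
}

func main() {
	s := NewStore()
	s.Put("apps/v1/Deployment/default/web", "kind: Deployment")
	m, ok := s.Get("apps/v1/Deployment/default/web")
	fmt.Println(ok, m)
}
```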
Events
Location: pkg/framework/events/
Event tracking and storage system.
Event System
- Event Types: error, success, info, warning
- Event Structure: ID, timestamp, type, resourceKey, message, details
- Storage Backend: BadgerDB for persistence
- Querying: Filter by resource, type, time range
- Retention: Configurable retention policies
- Cleanup: Scheduled cleanup of old events
CRD Client
Location: pkg/framework/crd/
Manages DeploymentParameters CRD interactions.
Functionality
- Dynamic client usage for unstructured resources
- Parameter retrieval (GetSpec)
- Spec merging (global + service-specific)
- Fallback to defaults when Kubernetes unavailable
- Default CRD: conductor.io/v1alpha1/DeploymentParameters
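The global + service-specific merge can be sketched as a simple map overlay; the function name and parameter keys are illustrative assumptions:

```go
package main

import "fmt"

// mergeSpec overlays service-specific parameters on top of global ones,
// so a service value always wins over the global default.
func mergeSpec(global, service map[string]any) map[string]any {
	merged := make(map[string]any, len(global)+len(service))
	for k, v := range global {
		merged[k] = v
	}
	for k, v := range service { // service-specific values win
		merged[k] = v
	}
	return merged
}

func main() {
	global := map[string]any{"replicas": 1, "logLevel": "info"}
	service := map[string]any{"replicas": 3}
	fmt.Println(mergeSpec(global, service)) // map[logLevel:info replicas:3]
}
```

Building a fresh map keeps both inputs unmodified, which matters when the global spec is shared across services.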
API Layer
Location: pkg/framework/api/
REST API handlers and Web UI integration.
Components
- HTTP handler structure
- Request/response patterns (JSON)
- Structured error handling
- YAML and parameter validation
- Web UI HTML templates
- Route organization (REST + Web)
Database
Location: pkg/framework/database/
BadgerDB integration for persistent storage.
Usage
- Event storage (time-series data)
- Manifest override persistence
- Transaction handling
- Configurable data path
Key Design Patterns
Dependency Injection
Components receive dependencies through constructors, making testing easier and dependencies explicit.
Interface-Based Design
Uses interfaces (like logr.Logger) to allow for different implementations and easier testing.
Error Wrapping
Uses Go 1.13+ error wrapping with fmt.Errorf and %w verb for error chains.
Context Propagation
Uses context.Context throughout for cancellation and timeout handling.
Resource Management
Uses defer for cleanup and proper resource management (database connections, file handles).
Concurrency Model
Goroutine Usage
- Reconciliation operations run in goroutines
- Event cleanup runs in background goroutines
- HTTP server handles requests concurrently
Channel Patterns
- reconcileCh: Channel for reconciliation triggers
- firstReconcileCh: Channel to signal the first reconciliation
Mutex Usage
- sync.Mutex: For exclusive access
- sync.RWMutex: For read-write locks
- Used for cache updates and shared state
Concurrency Limits
Semaphores limit concurrent reconciliation operations to 10 to prevent resource exhaustion.
Error Handling
Error Types
Custom error types in pkg/framework/errors/ for different error categories.
Error Wrapping
Errors are wrapped with context using fmt.Errorf and %w to preserve error chains.
Logging Patterns
Structured logging with logr interface, providing consistent log format and levels.
User-Facing Errors
API errors are formatted as structured JSON responses with error codes and messages.
Event Storage
All errors are stored as events for later analysis and debugging.
Performance Considerations
Caching Strategies
- GVK cache: Caches GroupVersionKind lookups
- Resource name cache: Caches resource name mappings
- Reduces Kubernetes API calls
Resource Pooling
Kubernetes clients are reused across requests to avoid connection overhead.
Optimization Techniques
- Concurrent reconciliation with limits
- Batch operations where possible
- Efficient data structures for lookups
Scalability Limits
- Max 10 concurrent reconciliations
- BadgerDB performance characteristics
- In-memory manifest storage limits
Next Steps
- Read about Design Concepts to understand the philosophy
- Check out Examples to see it in action
- Explore the source code on GitHub