Deployment Architecture
Global infrastructure lifecycle management, scalable software delivery, and configuration compliance in a Hub-and-Spoke model.
Executive Summary
Foreman with the Katello plugin forms a centralized provisioning, configuration, and content management control plane. The system orchestrates bare-metal and virtual machine creation, acting as the single source of truth.
Katello extends core Foreman with advanced software repository sync (via Pulp) and entitlement tracking (via Candlepin). The architecture heavily utilizes a Hub-and-Spoke model: a highly stateful Central Server offloads execution, caching, and network isolation duties to localized stateless or semi-stateful edge nodes called Smart Proxies (or Capsules). Managed instances communicate entirely via mutual TLS (mTLS) through proxies to prevent overwhelming the global backend.
Inventory Table
| Component | Function / Role | State | Criticality |
|---|---|---|---|
| Foreman Server | Web UI, API Engine, Orchestration | Stateful | Tier 1 (Core) |
| PostgreSQL | Primary datastore for Foreman/Candlepin | Stateful | Tier 1 (Core) |
| Pulp (Katello) | Content sync, package repos, Docker registry | Stateful (Disk) | Tier 1 (Core) |
| Candlepin | License & Subscription entitlements | Stateful | Tier 1 (Core) |
| Smart Proxy | Proxies DHCP/DNS/TFTP/Content to edge | Stateless/Cache | Tier 2 (Edge) |
| Puppet/Ansible | Config Management Master/CA | Cache | Tier 2 (Edge) |
| Managed Hosts | Target VMs or bare-metal endpoints | Endpoint | Tier 3 (Client) |
Component Deep-Dive
Engineering-grade descriptions of every component in the stack, based on official documentation and source internals.
Foreman Server
Stateful Tier 1 — Core
Foreman is the central orchestration engine and the single source of truth for the entire infrastructure. Written in Ruby on Rails, it exposes a full REST API (also accessible via the hammer CLI) and a Web UI. It coordinates infrastructure services without directly executing any low-level operations itself — instead, it delegates to Smart Proxies.
Core Responsibilities
- Host Inventory & Lifecycle: Maintains the authoritative record for every managed host — IP allocation, OS version, hostgroup, environment, and configuration facts.
- Provisioning Orchestration: On a "Build" trigger, Foreman simultaneously instructs the relevant Smart Proxy to create DHCP reservations, DNS A/PTR records, and TFTP bootloader configs. It also generates and serves OS installation templates (Kickstart / Preseed) specific to the host.
- External Node Classifier (ENC): Acts as the source of truth for Puppet, telling the Puppet Master which classes and parameters to apply to each individual node based on Foreman Hostgroups and Parameters.
- RBAC & Multi-Tenancy: Native Organizations and Locations create isolated management planes. An admin scoped to `DC_EU` cannot read or modify `DC_US` resources — enforced at the API query level.
- Remote Execution (REX): Pushes ad-hoc or scheduled jobs directly to hosts via SSH (with sudo escalation), bypassing the need for a configuration management agent for one-off tasks.
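The ENC contract described above is essentially a pure lookup: the Puppet Master asks Foreman for a node's classes and parameters, and Foreman answers from its hostgroup model. A minimal Python sketch of that contract — hostgroup names, classes, and parameters below are invented for illustration, not Foreman's actual schema:

```python
# Hypothetical sketch of Foreman's ENC behavior: classify a node by its
# hostgroup and return the classes/parameters document Puppet consumes.
HOSTGROUPS = {
    "web-prod": {
        "classes": {"nginx": {"worker_processes": 4}, "base::hardening": {}},
        "parameters": {"environment": "production", "datacenter": "DC_EU"},
    },
}

HOSTS = {"web-01.dc.local": "web-prod"}  # host -> hostgroup assignment

def enc_output(hostname: str) -> dict:
    """Return the classification Foreman would hand the Puppet Master."""
    group = HOSTGROUPS[HOSTS[hostname]]
    return {"classes": group["classes"], "parameters": group["parameters"]}

if __name__ == "__main__":
    import json
    print(json.dumps(enc_output("web-01.dc.local"), indent=2))
```

In production this document is rendered as YAML and fetched by the Puppet Master's `node_terminus` script; the dictionaries here only stand in for Foreman's database.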
Katello Plugin
Stateful Tier 1 — Core
Katello is a Foreman plugin that adds the full content and subscription management layer to the stack. It serves as the orchestration controller between Foreman (the host management layer), Pulp (the content storage layer), and Candlepin (the entitlement layer). Without Katello, Foreman is a bare provisioner; with Katello, it becomes a full lifecycle management platform comparable to Red Hat Satellite.
Internal Workflow
- Repository Sync: Katello instructs Pulp to pull content from external sources (Red Hat CDN, EPEL, custom repos) and store it locally. This is the only time traffic leaves the environment toward the internet.
- Content Views: Katello allows creating named, versioned snapshots of one or more repositories. A Content View called `RHEL8-Prod-Baseline` might contain RHEL 8 base + security errata only. Once published, it becomes an immutable version.
- Lifecycle Environments: Content Views are promoted through ordered environments: `Library → Dev → QA → Production`. A server in the `Dev` environment only sees packages approved for Dev, never experimental packages intended for Library only.
- Activation Keys: When a new server registers (`subscription-manager register`), an Activation Key tells Katello which Content View, Lifecycle Environment, and subscriptions to assign automatically — zero manual configuration per host.
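The publish-then-promote rules above can be sketched as a small state model. This is a simplified Python illustration under stated assumptions — real Katello tracks many versions per environment and allows more flexible promotion paths; the package names are invented:

```python
# Sketch: Content Views publish immutable versions into Library, then
# promote strictly along the ordered lifecycle path.
LIFECYCLE_PATH = ["Library", "Dev", "QA", "Production"]

class ContentView:
    def __init__(self, name: str):
        self.name = name
        self.versions = []       # each published version is an immutable snapshot
        self.promotions = {}     # environment name -> version number

    def publish(self, packages: frozenset) -> int:
        """Publishing creates a new immutable version and places it in Library."""
        self.versions.append(packages)
        version = len(self.versions)
        self.promotions["Library"] = version
        return version

    def promote(self, version: int, environment: str) -> None:
        """A version must sit in the previous environment before advancing."""
        idx = LIFECYCLE_PATH.index(environment)
        if idx == 0:
            raise ValueError("Library is populated by publish, not promote")
        prev = LIFECYCLE_PATH[idx - 1]
        if self.promotions.get(prev) != version:
            raise ValueError(f"version {version} must reach {prev} first")
        self.promotions[environment] = version

cv = ContentView("RHEL8-Prod-Baseline")
v1 = cv.publish(frozenset({"kernel-4.18.0", "openssl-1.1.1k"}))
cv.promote(v1, "Dev")
cv.promote(v1, "QA")
```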
Pulp 3 — Content Repository Engine
Stateful (Disk-heavy) Tier 1 — Core
Pulp is the content storage and distribution backbone. It is an independent open-source project (not exclusive to Foreman) that manages repositories of software artifacts — RPMs, container images, Ansible collections, Python packages, Debian packages — using a plugin-based architecture. In a Katello deployment, Pulp is the reason a `/var/lib/pulp` partition needs 300 GB+.
Core Object Model (Pulp 3)
- Remote: Defines an external content source — e.g., `https://cdn.redhat.com/content/dist/rhel8/`. Contains auth credentials and sync policies (on-demand vs. immediate).
- Repository & Repository Version: A repository is a collection of content units. Every sync or content modification creates a new immutable Repository Version, enabling safe rollbacks to "last known good" states.
- Publication: When clients need to install packages, Pulp generates a Publication — the full set of metadata (`repomd.xml`, `repodata/`) that DNF/YUM needs to resolve dependencies. Without this step, clients cannot install packages.
- Distribution: Maps a Publication to a URL path (e.g., `/pulp/content/RHEL8-Prod/`). This is the final step that makes content reachable by hosts over HTTPS.
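The object chain can be sketched with a few dataclasses. This is a deliberately simplified model — real Pulp objects carry many more fields, and the repository/package names here are invented:

```python
# Sketch of Pulp 3's chain: sync -> RepositoryVersion -> Publication -> Distribution.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class RepositoryVersion:
    number: int
    content: frozenset            # content unit identifiers (e.g. RPM NEVRAs)

@dataclass
class Repository:
    name: str
    versions: list = field(default_factory=list)

    def sync(self, units: set) -> RepositoryVersion:
        """Each sync produces a new immutable version; old ones remain for rollback."""
        version = RepositoryVersion(len(self.versions) + 1, frozenset(units))
        self.versions.append(version)
        return version

@dataclass(frozen=True)
class Publication:
    version: RepositoryVersion    # metadata (repomd.xml etc.) for this version

@dataclass(frozen=True)
class Distribution:
    base_path: str                # e.g. /pulp/content/RHEL8-Prod/
    publication: Publication

repo = Repository("rhel8-baseos")
v1 = repo.sync({"kernel-4.18.0-305", "bash-4.4.20"})
v2 = repo.sync({"kernel-4.18.0-305", "bash-4.4.20", "openssl-1.1.1k"})
# The distribution can keep serving v1 even after v2 exists -- that is the rollback lever.
dist = Distribution("/pulp/content/RHEL8-Prod/", Publication(v1))
```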
Performance Architecture
Pulp uses content deduplication at the artifact level — if the same RPM appears in 10 different Content Views, it is stored on disk exactly once. An async task system (a message queue consumed by a fleet of Pulp workers; Redis-backed in this deployment) handles all sync and publish operations in the background. PostgreSQL stores all metadata; actual blobs live on the filesystem or in object storage.
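The deduplication idea is content-addressed storage: a blob is keyed by its digest, so identical artifacts collapse to one copy regardless of how many repositories reference them. A minimal in-memory sketch (real Pulp writes blobs to disk or object storage; the repo names are invented):

```python
# Sketch of artifact-level dedup: store blobs once, keyed by sha256,
# and track which repositories reference each digest.
import hashlib

class ArtifactStore:
    def __init__(self):
        self._blobs = {}     # sha256 hex digest -> bytes (disk in real Pulp)
        self.refs = {}       # sha256 hex digest -> set of referencing repos

    def add(self, repo: str, blob: bytes) -> str:
        digest = hashlib.sha256(blob).hexdigest()
        self._blobs.setdefault(digest, blob)   # written only once
        self.refs.setdefault(digest, set()).add(repo)
        return digest

store = ArtifactStore()
rpm = b"\x00fake-rpm-bytes"
d1 = store.add("RHEL8-Dev", rpm)
d2 = store.add("RHEL8-Prod", rpm)   # same RPM referenced by a second Content View
```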
Candlepin — Entitlement Engine
Stateful Tier 1 — Core
Candlepin is the subscription and license entitlement engine embedded in Katello. It is the same technology that powers the Red Hat Customer Portal subscription management. It answers one question at runtime: "Is this host legally authorized to access this content?"
Data Model: Subscription → Pool → Entitlement
- Subscription: A purchased contract (e.g., "500-seat RHEL Server Standard"). Imported via a signed manifest file downloaded from Red Hat's portal.
- Pool: A subscription is split into one or more entitlement pools — each pool tracks a consumable quantity. A 500-seat subscription creates a pool with capacity 500.
- Entitlement: When a host registers and attaches a subscription, it consumes one slot from the pool and receives a signed X.509 certificate that cryptographically proves its authorization. This certificate is stored in `/etc/pki/entitlement/` on the host.
- Consumer: Any registered host, VM, or container using an entitlement. Candlepin tracks compliance state per consumer and reports back to Katello.
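The Pool-to-Entitlement accounting reduces to a bounded counter. A hedged sketch of that bookkeeping — real Candlepin issues signed X.509 certificates, which a plain token stands in for here, and the pool size is invented:

```python
# Sketch of Candlepin's Subscription -> Pool -> Entitlement accounting.
class Pool:
    def __init__(self, subscription: str, quantity: int):
        self.subscription = subscription
        self.quantity = quantity     # consumable capacity of this pool
        self.consumed = {}           # consumer name -> entitlement token

    def attach(self, consumer: str) -> str:
        """Consume one slot; refuse when the pool is exhausted."""
        if len(self.consumed) >= self.quantity:
            raise RuntimeError(f"pool for {self.subscription!r} is exhausted")
        # Real Candlepin returns a signed X.509 cert; a token stands in here.
        token = f"entitlement:{self.subscription}:{consumer}"
        self.consumed[consumer] = token
        return token

pool = Pool("RHEL Server Standard", quantity=2)
pool.attach("host-a")
pool.attach("host-b")
```

Under SCA mode (next section) this per-host attach step disappears; the sketch models the classic entitlement flow.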
Modern Simplification: Simple Content Access (SCA)
Red Hat introduced SCA mode which removes the mandatory per-host entitlement attachment step. With SCA, any registered host in the correct organization can access all entitled content without manually "attaching" a subscription — Candlepin still enforces org-level quotas but eliminates host-by-host entitlement management friction.
Smart Proxy / Capsule — Edge Service Broker
Cache / Stateless Tier 2 — Edge
The Smart Proxy is a modular, plugin-based RESTful service deployed in every remote subnet or datacenter. It is the "hands and feet" of the Foreman Master — it physically sits on the same Layer 2 subnet as the managed hosts and handles all the low-level networking protocols that cannot traverse routed networks without special configuration.
Service Modules & Internals
- DHCP (ISC / MS / Infoblox): On host build, Foreman calls the Proxy API (`POST /dhcp/:network/:address`). The Proxy creates a DHCP reservation with the host's MAC address, assigning a static IP and — crucially — setting `next-server` (TFTP IP) and `filename` (bootloader, e.g. `pxelinux.0`) so the host knows where to fetch its boot image.
- DNS (BIND / FreeIPA / Infoblox): Creates Forward (A record: hostname → IP) and Reverse (PTR record: IP → hostname) DNS entries. On host deletion, both records are cleaned up atomically. This ensures `ping foreman-host-01.dc.local` resolves instantly after provisioning.
- TFTP: Manages files inside `/var/lib/tftpboot/`. When a host enters Build mode, the Proxy writes a specific PXE config file at `pxelinux.cfg/01-<MAC>`. This file tells the booting host which kernel and initrd to download, and which kernel arguments (including the Kickstart URL) to pass.
- Templates Proxy: Proxies HTTP requests for Kickstart/Preseed templates from the booting host to the Foreman Master. This means the host never needs a route to the Master's IP — only to the Proxy.
- Pulp Mirror (Capsule): Maintains a local copy of selected Content Views from the Master's Pulp. Hosts in this subnet pull RPMs from the Capsule, not the Master — dramatically reducing WAN bandwidth usage and centralizing failure domains.
- Puppet CA: Can host an independent Puppet Certificate Authority, signing agent CSRs locally. The Foreman UI controls cert lifecycle (list/sign/revoke) via the Smart Proxy API on port 9090.
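The `01-<MAC>` naming convention in the TFTP bullet is mechanical: PXELINUX looks for a file named `01-` (the ARP hardware type for Ethernet) followed by the MAC address, lowercased, with colons replaced by dashes. A sketch of what the Proxy writes — the kernel paths and Kickstart URL here are hypothetical examples:

```python
# Sketch: derive the PXELINUX config filename and render a minimal stanza
# like the one a Smart Proxy writes under /var/lib/tftpboot/pxelinux.cfg/.
def pxelinux_config_name(mac: str) -> str:
    """01- prefix (Ethernet ARP type) + lowercase MAC with dashes."""
    return "01-" + mac.lower().replace(":", "-")

def pxe_entry(mac: str, kernel: str, initrd: str, ks_url: str) -> str:
    """Render a minimal build-mode stanza pointing the host at its Kickstart."""
    return (
        f"# {pxelinux_config_name(mac)}\n"
        "DEFAULT build\n"
        "LABEL build\n"
        f"  KERNEL {kernel}\n"
        f"  APPEND initrd={initrd} inst.ks={ks_url}\n"
    )

print(pxe_entry("AA:BB:CC:DD:EE:FF", "boot/vmlinuz", "boot/initrd.img",
                "http://sp-eu-a.dc.local:8000/unattended/provision"))
```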
PostgreSQL — Primary State Store
Stateful Tier 1 — Core
PostgreSQL serves as the single authoritative datastore for three major subsystems simultaneously: Foreman (host inventory, configs, users), Candlepin (subscription pools, entitlements, consumers), and Pulp (repository metadata, content unit records, task state). Its health is binary — if PostgreSQL is down, the entire management plane is offline.
Per-Application Schema Isolation
- Foreman schema: Hosts, hostgroups, parameters, compute resources, users, roles, audit log (hundreds of tables accumulated through Rails migrations).
- Candlepin schema: Owners, consumers, pools, entitlements, products, subscriptions — typically a separate database named `candlepin`.
- Pulp schema: Repository versions, artifacts, remote configurations, publications, distributions, async task status — in a separate database named `pulp`.
Redis — Async Task Queue
Cache Tier 1 — Core
Redis functions as the in-memory task broker and result backend for Pulp's background worker fleet. When Katello triggers a repository sync, publishes a Content View, or mirrors content to a Capsule, the task is not executed inline — it is serialized and pushed onto a Redis queue. Pulp workers consume these tasks asynchronously, allowing the API to remain responsive during heavy I/O operations.
Task Lifecycle
- API call (`POST /katello/api/repositories/:id/sync`) → Katello creates a task record in PostgreSQL and pushes a message onto the Redis queue.
- A Pulp worker picks up the message, executes the sync (downloading artifacts from the Remote), and writes results back to PostgreSQL.
- Task status is visible in the Foreman UI under Monitor → Tasks, which polls the PostgreSQL task record.
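The three steps above can be sketched with a toy broker: `queue.Queue` stands in for Redis and a dict stands in for the PostgreSQL task table. Names and the repo id are illustrative:

```python
# Sketch of the sync-task lifecycle: API writes a task record, enqueues a
# message, returns immediately; a worker consumes the message and updates
# the record.
import queue

TASKS = {}                 # stands in for the PostgreSQL task table
BROKER = queue.Queue()     # stands in for the Redis queue

def api_sync(repo_id: int) -> int:
    """Handle POST .../repositories/:id/sync: record + enqueue, no inline work."""
    task_id = len(TASKS) + 1
    TASKS[task_id] = {"repo": repo_id, "state": "pending"}
    BROKER.put(task_id)
    return task_id

def worker_run_once() -> None:
    """One worker iteration: pop a task, do the sync, persist the result."""
    task_id = BROKER.get()
    TASKS[task_id]["state"] = "running"
    # ... artifact download from the Remote would happen here ...
    TASKS[task_id]["state"] = "finished"

tid = api_sync(repo_id=42)
worker_run_once()
```

This separation is exactly why the UI stays responsive during a multi-gigabyte sync: the API call finishes as soon as the message is enqueued.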
Puppet CA + Ansible — Configuration Management
Cache Tier 2 — Edge
Puppet CA is the cryptographic identity provider for the entire configuration management trust chain. In a Foreman deployment, the Puppet CA can run on the Master or on a Smart Proxy. Every managed node's Puppet agent generates a private key and a Certificate Signing Request (CSR) on first run. Foreman controls the full certificate lifecycle through the Smart Proxy API.
Certificate Lifecycle
- Request: New host boots → Puppet agent runs → generates RSA key pair → sends CSR to Puppet CA via Smart Proxy.
- Pending: CSR appears in Foreman UI under Infrastructure → Smart Proxies → Puppet CA → Certificates with status "Requested".
- Sign: Admin (or autosign policy) approves → Foreman calls Proxy API → Proxy executes `puppetserver ca sign --certname <host>`.
- Enforcement: The signed agent connects to the Puppet Master, receives its compiled catalog (the desired state declaration), applies all specified resources, and reports facts back to Foreman.
- Revocation: On host decommission, Foreman revokes the cert via the Proxy API, invalidating the host's cryptographic identity immediately.
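The lifecycle above is a small state machine with only a few legal transitions. A minimal sketch (the transition table is a simplification of the real CA behavior, which also supports rejection and cleanup):

```python
# Sketch of the Puppet certificate lifecycle Foreman drives via the Smart
# Proxy API: requested -> signed -> revoked, with bad transitions rejected.
TRANSITIONS = {
    ("requested", "sign"):   "signed",
    ("signed",    "revoke"): "revoked",
}

class HostCertificate:
    def __init__(self, certname: str):
        self.certname = certname
        self.state = "requested"   # agent has submitted its CSR

    def apply(self, action: str) -> str:
        key = (self.state, action)
        if key not in TRANSITIONS:
            raise ValueError(f"cannot {action} a {self.state} certificate")
        self.state = TRANSITIONS[key]
        return self.state

cert = HostCertificate("web-01.dc.local")
cert.apply("sign")     # admin approval or autosign policy
cert.apply("revoke")   # host decommission
```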
Ansible Alternative (REX)
For environments without Puppet, Foreman's Remote Execution (REX) feature pushes Ansible playbooks directly to hosts via SSH. No agent required. Playbooks are stored in Foreman and executed through a REX Job Template, covering the same end state as Puppet catalogs but in an agentless, push-based model.
Hardware / Topology Diagram
flowchart LR
%% ══════════════════════════════════════════════════════
%% EXTERNAL UPSTREAM — Internet Boundary
%% ══════════════════════════════════════════════════════
subgraph Internet [" "]
direction TB
CDN(["🌐 Red Hat CDN\nEPEL / Git / Custom"])
end
%% ══════════════════════════════════════════════════════
%% SITE 0 — GLOBAL HQ / Primary Management Datacenter
%% ══════════════════════════════════════════════════════
subgraph HQ ["🏢 Site 0 — Global HQ | Primary Datacenter"]
direction LR
HQ_FW["🔥 Perimeter\nFirewall / NAT"]
subgraph HQ_MGMT ["Control Plane (Management VLAN)"]
direction TB
Master(["⚙️ Foreman/Katello\nMaster Node"])
LB["HAProxy LB\n:443 VIP"]
LB --> Master
end
subgraph HQ_DATA ["Data Plane (Storage VLAN)"]
direction TB
DB[("💾 PostgreSQL\nPrimary + Replica")]
Pulp[("📦 Pulp Content Store\n/var/lib/pulp ~TB")]
Redis["⚡ Redis\nTask Broker"]
end
subgraph HQ_DMZ ["DMZ — Web Tier"]
direction LR
SP_DMZ{{"SP-DMZ\nCapsule"}}
Hosts_DMZ["Web Servers\n& API Endpoints"]
SP_DMZ -->|"UDP 67/69\nTCP 80/443/8140"| Hosts_DMZ
end
HQ_FW -->|"TCP 443"| LB
Master --- DB
Master --- Pulp
Master --- Redis
end
CDN ==>|"HTTPS :443\nRepo Sync"| HQ_FW
Master ===>|"mTLS :9090\nContent Sync :443"| SP_DMZ
%% ══════════════════════════════════════════════════════
%% SITE 1 — REGIONAL DATACENTER EU
%% ══════════════════════════════════════════════════════
subgraph EU ["🇪🇺 Site 1 — Regional DC | EU-West"]
direction TB
EU_FW["🔥 Edge\nFirewall"]
subgraph EU_Proxy ["Capsule Layer (Redundant)"]
direction LR
SP_EU_A{{"SP-EU-A\nActive"}}
SP_EU_B{{"SP-EU-B\nStandby"}}
end
subgraph EU_Hosts ["Managed Host Pools"]
direction TB
EU_BM["Bare Metal\nCompute Cluster"]
EU_VM["VM Workloads\n(oVirt/VMware)"]
end
EU_FW --> SP_EU_A
EU_FW --> SP_EU_B
SP_EU_A -->|"DHCP/DNS/TFTP"| EU_BM
SP_EU_B -->|"YUM/Puppet"| EU_VM
end
%% ══════════════════════════════════════════════════════
%% SITE 2 — BRANCH OFFICE APAC
%% ══════════════════════════════════════════════════════
subgraph APAC ["🌏 Site 2 — Branch Office | APAC"]
direction TB
APAC_FW["🔥 Branch\nFirewall"]
SP_APAC{{"SP-APAC\nCapsule"}}
APAC_Hosts["Local Servers\n& Dev Workstations"]
APAC_FW --> SP_APAC
SP_APAC -->|"DHCP/TFTP/Content"| APAC_Hosts
end
%% ══════════════════════════════════════════════════════
%% SITE 3 — CLOUD / HYBRID (AWS / Azure)
%% ══════════════════════════════════════════════════════
subgraph Cloud ["☁️ Site 3 — Hybrid Cloud | AWS / Azure"]
direction TB
VPN_GW["VPN Gateway\n/ Direct Connect"]
SP_Cloud{{"SP-Cloud\nCapsule (VPC)"}}
Cloud_VMs["Cloud Instances\nEC2 / Azure VMs"]
VPN_GW --> SP_Cloud
SP_Cloud -->|"subscription-manager\n+ YUM :443"| Cloud_VMs
end
%% ══════════════════════════════════════════════════════
%% WAN LINKS — Between HQ Master and Remote Capsules
%% ══════════════════════════════════════════════════════
Master ===>|"WAN mTLS :9090\n& Sync :443"| EU_FW
Master ===>|"WAN mTLS :9090\n& Sync :443"| APAC_FW
Master ===>|"IPsec / TLS :9090"| VPN_GW
Software Stack Diagram
flowchart LR
%% ═══════════════════════════════════════════════════
%% INGRESS LAYER
%% ═══════════════════════════════════════════════════
Admin(["👤 Admin / DevOps\nUI + CLI (hammer)"])
Admin --> LB
%% ═══════════════════════════════════════════════════
%% SITE 0 — GLOBAL HQ CORE STACK
%% ═══════════════════════════════════════════════════
subgraph CoreStack ["🏢 Global HQ — Application Core"]
direction LR
LB["HAProxy / Nginx\n:443 VIP"]
subgraph ForemanApp ["Foreman Application"]
direction TB
API("REST API Layer\n/api/v2")
Web("Puma Workers\n(Ruby on Rails)")
Rex("Remote Execution\nJobs / SSH")
API --> Web
Web --> Rex
end
subgraph KatellaStack ["Katello Plugin"]
direction TB
KatAPI("Katello API\n/katello/api")
CV("Content View\nEngine")
LE("Lifecycle\nEnvironments")
AK("Activation Key\nManager")
KatAPI --> CV
CV --> LE
LE --> AK
end
subgraph DataLayer ["Persistent State"]
direction TB
PSQL[("💾 PostgreSQL\nforeman + candlepin + pulp")]
Redis(["⚡ Redis\nTask Broker"])
end
subgraph ContentEngine ["Katello Subsystems"]
direction TB
Candlepin("Candlepin\nEntitlement Engine")
PulpAPI("Pulp 3 API\nDjango REST")
PulpWorker["Pulp Workers\nSync + Publish"]
PulpAPI --> PulpWorker
end
LB --> ForemanApp
LB --> KatellaStack
Web --> PSQL
Web --> KatAPI
KatAPI --> Candlepin
KatAPI --> PulpAPI
Candlepin --> PSQL
PulpAPI --> PSQL
PulpWorker --> Redis
end
%% ═══════════════════════════════════════════════════
%% SITE 1 — EU CAPSULE STACK (Redundant Pair)
%% ═══════════════════════════════════════════════════
subgraph EU_Stack ["🇪🇺 EU-West — Capsule Pair"]
direction TB
subgraph EU_A ["SP-EU-A (Active)"]
direction TB
EU_A_API("Proxy API :9090")
EU_A_Pulp["Pulp Mirror"]
EU_A_DHCP["ISC DHCP"]
EU_A_DNS["BIND DNS"]
EU_A_TFTP["TFTP :69"]
EU_A_Puppet["Puppet CA :8140"]
end
subgraph EU_B ["SP-EU-B (Standby)"]
direction TB
EU_B_API("Proxy API :9090")
EU_B_Pulp["Pulp Mirror"]
end
end
%% ═══════════════════════════════════════════════════
%% SITE 2 — APAC CAPSULE
%% ═══════════════════════════════════════════════════
subgraph APAC_Stack ["🌏 APAC — Capsule"]
direction TB
APAC_API("Proxy API :9090")
APAC_Pulp["Pulp Mirror"]
APAC_DHCP["ISC DHCP"]
APAC_TFTP["TFTP :69"]
end
%% ═══════════════════════════════════════════════════
%% SITE 3 — CLOUD CAPSULE
%% ═══════════════════════════════════════════════════
subgraph Cloud_Stack ["☁️ Cloud (VPC) — Capsule"]
direction TB
Cloud_API("Proxy API :9090")
Cloud_Pulp["Pulp Mirror"]
Cloud_SubMgr["subscription-manager\ncert-based auth"]
end
%% ═══════════════════════════════════════════════════
%% CONTROL PLANE CONNECTIONS (Master → Capsules)
%% ═══════════════════════════════════════════════════
API ===>|"mTLS :9090\nOrchestration"| EU_A_API
API ===>|"mTLS :9090\nOrchestration"| EU_B_API
API ===>|"mTLS :9090\nOrchestration"| APAC_API
API ===>|"IPsec+TLS :9090"| Cloud_API
PulpWorker ===>|"Content Sync :443\nWAN"| EU_A_Pulp
PulpWorker ===>|"Content Sync :443\nWAN"| EU_B_Pulp
PulpWorker ===>|"Content Sync :443\nWAN"| APAC_Pulp
PulpWorker ===>|"Content Sync :443\nIPsec"| Cloud_Pulp
Network Routes (Critical Flow)
sequenceDiagram
autonumber
participant Ext as Upstream Repos (Red Hat/Git)
participant Master as Foreman/Katello Master
participant SP as Smart Proxy Edge
participant Host as Managed Destination Node
rect rgba(59, 130, 246, 0.05)
note over Ext, Master: 1. Core Content Hydration (Global Reach)
Master->>Ext: HTTPS (443) Pull Remote Upstream Packages
Ext-->>Master: Content blobs synced directly into Pulp Storage
end
rect rgba(52, 211, 153, 0.05)
note over Master, SP: 2. Edge Content Federation (Regional Spread)
SP->>Master: HTTPS (443) Katello sync request over internal mTLS
Master-->>SP: Content View versions mirrored out to the edge
end
rect rgba(245, 158, 11, 0.05)
note over SP, Host: 3. Bare-Metal Provisioning Sequence (Local Subnet Boundaries)
Host->>SP: UDP (67) DHCP Discover broadcast
SP-->>Host: UDP (68) DHCP Offer (Injects Next-Server IP Target)
Host->>SP: UDP (69) TFTP GET request for pxelinux.0
SP-->>Host: Streaming PXE boot images payload (Kernel/Initrd)
end
rect rgba(167, 139, 250, 0.05)
note over Master, Host: 4. OS Installation Architecture & Post-Script
Host->>SP: HTTPS (443) Fetch Kickstart/Preseed provisioning template
Host->>SP: HTTPS (443) Stream packages (RPM/DEB) and OS payloads
Host->>Master: HTTPS (443) API call: Foreman global registration complete
end
Reachability Matrix
| From (Source) | To (Destination) | Port / Protocol | Security / Auth |
|---|---|---|---|
| Admin Endpoints | Foreman Server | TCP 80 / 443 | TLS + LDAP/SSO/SAML |
| Foreman Server | External Repos | TCP 443 | Client Credentials |
| Foreman Server | Smart Proxy | TCP 9090 | Katello CA (mTLS) |
| Smart Proxy | Foreman Server | TCP 443 | Katello CA (mTLS) |
| Managed Host | Smart Proxy | UDP 67/68, 69 | L2 Trust / Subnet local |
| Managed Host | Smart Proxy | TCP 80 / 443 | Host Cert mTLS |
| Master/Proxy | Managed Host | TCP 22 | SSH Keys (sudoers) |
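The matrix above can be encoded as data and queried before opening firewall tickets. A partial sketch — the role names are shorthand for the table's rows, and only a subset of rows is encoded:

```python
# Sketch: the reachability matrix as a lookup table. Anything not listed
# is implicitly denied, mirroring a default-deny firewall posture.
MATRIX = {
    ("admin",   "foreman"):  [("tcp", 80), ("tcp", 443)],
    ("foreman", "internet"): [("tcp", 443)],
    ("foreman", "proxy"):    [("tcp", 9090)],
    ("proxy",   "foreman"):  [("tcp", 443)],
    ("host",    "proxy"):    [("udp", 67), ("udp", 68), ("udp", 69),
                              ("tcp", 80), ("tcp", 443)],
    ("master",  "host"):     [("tcp", 22)],
}

def is_allowed(src: str, dst: str, proto: str, port: int) -> bool:
    """True only if the (src, dst) pair explicitly lists the proto/port."""
    return (proto, port) in MATRIX.get((src, dst), [])

assert is_allowed("foreman", "proxy", "tcp", 9090)
assert not is_allowed("host", "foreman", "tcp", 443)  # hosts talk only to proxies
```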
Setup & Dependency Order
- Environmental Prep: Deploy the Master VM, configure LVM partitions (dedicating 300 GB+ to `/var/lib/pulp`), and reserve static IPs mapping to valid A/PTR DNS records.
- Infrastructure: Stand up the PostgreSQL database cluster.
- Master Deployment: Run the monolithic install via `foreman-installer --scenario katello`.
- Content Logic: Import the subscription manifest, sync external repos, and publish Content Views.
- Proxy CA Generation: Request Proxy certificates via the Foreman API.
- Edge Proxy Rollout: Deploy Smart Proxy instances in remote subnets with `foreman-installer --scenario capsule`, securely passing the generated certs.
- Service Setup: Bind subnets to Proxies and enable the required services (DHCP/DNS/TFTP).
- Client Booting: Boot VMs/hardware on Proxy-managed networks to trigger PXE installations.
Control & Data Plane Separation
Control Plane
The Foreman Server orchestrates metadata logic (Hostgroups, Content View versions, IP allocations, user RBAC). Every decision is persisted as a PostgreSQL transaction.
Data/Content Plane
The Pulp subsystem and its synchronized file stores. The heaviest I/O sits here: repositories are pulled once at the core and mirrored out to the edge Smart Proxies.
Provisioning Plane
The actual subnets where the DNS mappings, IPv4 leasing, and PXE boot images traverse. Governed by the edge Smart Proxies.
Observability Plane
Hosts check in using subscription-manager or Puppet Facts, shipping configuration deltas back to Foreman.
Critical Paths
- Certificate Trust Chains: All host connectivity roots back to the Katello-generated internal CA. If that CA expires, traffic on ports 443, 9090, and 8140 loses trust simultaneously.
- Content Promotion: Upstream Repository -> Sync Task -> Content View Publish -> Promote. A stalled task anywhere in this chain blocks zero-day patching.
- Provisioning Network Relay: DHCP -> TFTP `pxelinux.0` -> HTTP `ks.cfg`. If the network lacks IP helper addresses pointing at the Smart Proxy, bare-metal builds fail.
Failure Domains
Outage: Master Node
Blast Radius: Global
Global visibility, orchestration, deployments, and patching tasks all stop. Edge Capsules continue serving cached content.
Outage: Smart Proxy
Blast Radius: Local Subnet
The local datacenter loses provisioning capability and stops serving cached YUM content.
Outage: Pulp Storage Full
Blast Radius: Global Sync System
A very common failure: the disk fills completely and sync tasks pile up in a frozen queue.
Troubleshooting Flowcharts (Provisioning)
flowchart LR
Start([User initiates host build via Edge Proxy]) --> A{Does Host\nsee DHCP?}
A -- No --> B[Check Top-of-Rack Switch:\nIP Helper Address]
A -- Yes --> C{Does it pull\nTFTP kernel?}
B --> FIN_A(((Resolve Network)))
C -- No --> D[Check UDP 69 on Smart Proxy.\nVerify firewalld ports.]
C -- Yes --> E{Does it fetch Kickstart\nover HTTP?}
D --> FIN_B(((Fix Proxy TFTP)))
E -- No --> F[Verify Host is in correct Hostgroup\n& OS templates are synced]
E -- Yes --> G(((Provision successful,\nOS installer runs)))
Security Boundaries
- Internal CA Barrier: No node can interact with port 9090 without a certificate signed by the internal Katello root CA.
- Tenant Isolation: Native RBAC creates isolated Organizations. An admin assigned to "DC_EU" has zero visibility into "DC_US" nodes.
- Remote Execution Security: Ansible Remote Execution requires SSH keys that are explicitly restricted via server-side `/etc/sudoers` to a whitelist of recognized commands.
- Secret Storage: Compute Resource credentials are encrypted at rest in the PostgreSQL datastore.
Risks and Improvements
Ruby/Pulp Worker RAM Saturation
Large deployments suffer memory bloat causing unresponsive UI and API queuing timeouts.
Improvement: Externalize Redis and PostgreSQL to dedicated, well-tuned clusters. Tune Puma worker and thread counts.
Active-Active High Availability Limitations
True active-active HA is difficult in Katello because Pulp relies on exclusive task locks during sync and publish operations.
Improvement: Shift to Active/Passive failover with shared highly-available SAN storage.
Fragile Upgrades
Katello upgrades are notoriously destructive if an intermediate step release is skipped, leaving Rails database migrations half-applied.
Improvement: Enforce strict VM Snapshot workflows or GitOps pipeline rollbacks prior to operations.
Concept Analogy
Imagine a global manufacturing corporation. The central node (Foreman Master) is the Global Operations Headquarters. It sets all standard operating procedures, signs off on the supply chain contracts (Candlepin), and houses the master blueprint designs (Pulp).
But headquarters can't afford to individually ship every single screwdriver or blueprint to a factory 5,000 miles away every day—it would overload the shipping lanes (the network).
So, we build Regional Distribution Hubs (Smart Proxies) near every major factory. Operations Headquarters sends one massive optimized shipment to the Regional Hub at night. The next day, when a factory needs to build machines, supply IP addresses (DHCP), or grab patching materials, they walk directly next door to their local Regional Hub.
This accelerates efficiency, secures standard operational compliance at the edge, and ensures our entire network is resilient while Operations retains 100% visibility.