Deep research report on a Just‑in‑Time Access Broker for AI agents
Executive summary
The project concept is a runtime authorisation control plane (“broker”) for AI agents that evaluates each access attempt at request time, optionally routes risky actions for human approval, and—only after approval—issues short‑lived, tightly scoped credentials that downstream systems enforce. A deliberately small “simulation lab” exists primarily to prove the end‑to‑end flow (auto‑approve, approve‑required, deny) in a demoable, testable way.
This framing is well aligned with established zero trust architecture ideas: per‑request access decisions, policy decision/enforcement points, and minimising implicit trust zones. [1] A broker that centres workload identity (non‑human identities) is timely because workload identities are difficult to manage safely (no MFA, inconsistent lifecycle controls) and are explicitly called out as higher‑risk in identity governance guidance. [2]
The idea is strongest as a portfolio or internal platform prototype when it stays narrowly scoped: a high‑quality authorisation decisioning + approval workflow + token issuance/enforcement loop, with first‑class auditability and predictable failure modes. The biggest “product risk” is differentiation: commercial platforms already provide just‑in‑time (JIT) elevation and approvals (albeit typically for humans or infrastructure access), so the project must clearly position agent runtime intent brokering as the core novelty and keep the simulation harness from becoming a second product. [3]
What will make this project compelling to reviewers (and closest to real enterprise constraints) is (a) deterministic policy as the decision authority, (b) standard token semantics (validated issuer/audience/expiry; tight scopes), (c) durable approval workflows and idempotent issuance, and (d) audit trails + observability that make the system operable. [4]
Project restatement: goals and scope
Concise restatement of goals
Build a Just‑in‑Time Access Broker that sits between AI agents and protected internal APIs/resources to:
1) Authenticate the agent as a workload using a strong external identity root (initially, Microsoft workload identities and workload identity federation are a plausible anchor because they are designed to let workloads access protected resources without long‑lived secrets). [5] [6]
2) Authorise each request at runtime using deterministic policy (e.g., policy‑as‑code), yielding one of three outcomes: auto‑approve, require approval, or deny. [7]
3) Orchestrate approvals for high‑risk requests with a durable workflow that records human decisions and rationales.
4) Issue short‑lived, resource‑scoped tokens only when approved, and ensure downstream protected APIs validate those tokens (issuer/audience/scope/expiry) before serving data or executing actions. [8]
5) Log a replayable audit trail across identity verification → policy match → decision → approvals → token minting → token use.
6) Provide a small Agent Access Simulation Lab strictly as a demonstrator/test harness to exercise the broker end‑to‑end.
Scope boundaries and open‑ended options
Because the wider domain and target users can be treated as open‑ended, you can explicitly present two layers of scope:
- Baseline (recommended) scope: internal platform/security teams securing “agent tooling” against internal APIs (knowledge base, reporting, ops actions) with JIT credentials and approvals.
- Optional variants (choose one, not all, to keep the project coherent):
- Multi‑cloud workload identity roots (e.g., Amazon Web Services IRSA for Kubernetes pods [9], and Google Workload Identity Federation [10]) to demonstrate portability. [11]
- Standards‑based “security token service” behaviour using OAuth 2.0 Token Exchange semantics (token-in → token‑out) rather than bespoke minting. [12]
- Proof‑of‑possession token hardening (DPoP or certificate‑bound tokens) as an advanced security feature. [13]
Reference architecture and user flows
Architectural framing: zero trust components mapped to the broker
In the NIST zero trust model, access is mediated by a policy decision point (PDP) and policy enforcement point (PEP), emphasising granular, least‑privilege, per‑request decisions. NIST [14] defines zero trust as minimising uncertainty in “least privilege per‑request access decisions” and explicitly describes PDP/PEP as the abstract model for granting access. [1]
A clean conceptual mapping is:
- Broker core: PDP + token service (decides, then provisions credentials).
- Protected API gateway / service middleware: PEP (enforces the broker-issued token claims and scopes).
- Workflow engine: durable approval orchestration (so “pending approval” is reliable and auditable). [15]
- Policy engine: deterministic policy evaluation (OPA/Rego is a common choice for policy‑as‑code). [16]
- Audit + observability: end‑to‑end traceability (e.g., OpenTelemetry traces/metrics/logs correlation). [17]
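As a concrete illustration of the deterministic policy component, the decision logic can start as a simple rule table that fails closed; the agents, actions, and resources below are hypothetical placeholders, not part of the original design:

```python
# Hedged sketch of deterministic policy evaluation yielding the broker's
# three outcomes. Rule entries here are hypothetical examples.
RULES = [
    # (agent, action, resource, outcome)
    ("report-bot", "read", "kb/articles", "auto_approve"),
    ("ops-bot", "restart", "svc/billing", "require_approval"),
]

def evaluate(agent: str, action: str, resource: str) -> str:
    """Return auto_approve, require_approval, or deny for a declared intent."""
    for rule_agent, rule_action, rule_resource, outcome in RULES:
        if (rule_agent, rule_action, rule_resource) == (agent, action, resource):
            return outcome
    # Fail closed: any request with no matching rule is denied.
    return "deny"
```

In practice this table would be replaced by a policy engine such as OPA, but the fail-closed default and the three-way outcome shape carry over unchanged.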
User/technical flowchart
```mermaid
flowchart TD
    A[Agent workload] -->|OIDC/workload identity token| B[Access Broker API]
    B --> C{Verify workload identity}
    C -->|valid| D[Map to internal agent profile]
    C -->|invalid| X[Deny + audit]
    D --> E[Deterministic policy eval]
    E -->|auto-approve| F[Mint short-lived scoped token]
    E -->|needs approval| G[Create approval workflow + pending ticket]
    E -->|deny| X
    G --> H[Approver UI]
    H -->|approve| F
    H -->|deny| X
    F --> I[Protected API / PEP middleware]
    I -->|validate iss/aud/exp/scope| J[Execute action + respond]
    I -->|invalid/expired| K[Reject + audit]
    B --> L[(Audit event store)]
    I --> L
    H --> L
```
This flow is consistent with guidance that token validation should check signature/issuer against discovery metadata (and validate audience and lifetime), rather than trusting token contents implicitly. [18]
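The PEP's validation step can be sketched with nothing but the standard library. This is a deliberately minimal HS256 example (issuer, audience, and scope values are hypothetical); a production broker would use asymmetric keys and discovery metadata rather than a shared secret:

```python
import base64
import hashlib
import hmac
import json
import time

def _b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def _b64url_decode(s: str) -> bytes:
    return base64.urlsafe_b64decode(s + "=" * (-len(s) % 4))

def mint(claims: dict, secret: bytes) -> str:
    """Broker side: mint a short-lived HS256 JWT (sketch only)."""
    header = _b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = _b64url(json.dumps(claims).encode())
    signing_input = f"{header}.{payload}".encode()
    sig = _b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())
    return f"{header}.{payload}.{sig}"

def validate(token: str, secret: bytes, issuer: str,
             audience: str, required_scope: str) -> dict:
    """PEP side: reject unless signature, iss, aud, exp, and scope all pass."""
    header_b64, payload_b64, sig_b64 = token.split(".")
    signing_input = f"{header_b64}.{payload_b64}".encode()
    expected = hmac.new(secret, signing_input, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, _b64url_decode(sig_b64)):
        raise ValueError("bad signature")
    claims = json.loads(_b64url_decode(payload_b64))
    if claims.get("iss") != issuer:
        raise ValueError("wrong issuer")
    if claims.get("aud") != audience:
        raise ValueError("wrong audience")
    if claims.get("exp", 0) <= time.time():
        raise ValueError("expired")
    if required_scope not in claims.get("scope", "").split():
        raise ValueError("missing scope")
    return claims
```

The point of the sketch is the order and completeness of the checks, not the crypto: every rejection path is explicit, which is what makes the later "abuse-case tests" (wrong audience, expired token, scope escalation) straightforward to write.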
Milestone timeline (indicative)
```mermaid
timeline
    title Milestones for a focused MVP (part-time, ~4–6 weeks)
    Week 1 : Finalise threat model, request schema, decision model (allow / pending / deny)
           : Choose identity root (Entra workload identity federation) and token format (JWT)
    Week 2 : Implement identity verification + agent profile mapping
           : Implement policy evaluation + decision logging
    Week 3 : Implement approval workflow (durable state machine) + minimal approver UI
           : Add idempotency keys for requests + token minting
    Week 4 : Implement protected API enforcement (PEP) + end-to-end demo scenarios
           : Build the simulation lab (3 agents, 3 protected APIs)
    Week 5 : Observability (traces/metrics), audit replay UI, negative testing (replays, expiry)
           : Security hardening (rate limits, fail-closed defaults)
    Week 6 : Validation experiments + polish (docs, demo script, benchmarks)
           : Optional - LLM explanation service (non-authoritative) behind a kill switch
```
The workload identity federation premise—secure access “without managing secrets”—is explicitly a key motivation in workload identity federation guidance, making it a strong narrative anchor for Week 1–2 work. [19]
Strengths and opportunity
Technical strengths
The project is technically strong because it demonstrates a full control‑plane loop: identity verification, policy decisioning, workflow orchestration, token provisioning, downstream enforcement, and auditing. This is closer to real enterprise access systems than a typical “RBAC demo” because the protected service must enforce the broker’s token and scope checks, and the broker’s issuance must be idempotent and auditable to be credible. [20]
The concept is also compatible with standard protocol patterns:
- OAuth 2.0 provides an explicit model for obtaining limited access with an approval interaction (useful when you conceptually model “approval gating” as part of credential issuance). [21]
- OAuth bearer tokens are powerful but risky because any holder can use them; this directly motivates short lifetimes, scoping, and strong transport/storage hygiene. [22]
- JWTs are widely used for claims‑based access tokens; their security depends on careful verification, and best current practices exist specifically to prevent common implementation mistakes. [23]
Market relevance for the “agent era”
Even without committing to a specific product market, the underlying organisational problem is coherent: workload identities (applications, services, scripts, containers) need access—but they often lack MFA and can be harder to lifecycle‑manage, which can raise compromise risk. [24] A broker that makes access decisions at request time and issues short‑lived scoped credentials is a practical response to that risk profile. [25]
From a portfolio perspective, this creates credible “systems depth” storylines:
- Zero trust aligned per‑request authorisation choke point. [26]
- Durable workflow engineering for approvals and expiry. [15]
- Observability instrumentation and audit replay (a common real-world requirement). [27]
Team and novelty strengths (assuming a small team or solo builder)
The proposal has a strong “doable but non‑trivial” shape if you keep the simulation lab deliberately minimal and focus depth on correctness, failure behaviour, and auditability. The novelty is not “agents are cool”, but that the broker treats agents as non‑human actors whose access should be dynamically brokered based on declared intent (action/resource) and policy, with human approvals for dangerous actions.
A particularly strong novelty choice (already implied in the concept) is to keep LLMs out of the authorisation decision and use them only to generate explanations/risk summaries. That mitigates the security risk of non‑deterministic decisioning while still showing practical AI integration.
Weaknesses, risks, and constraints
Technical feasibility risks
Identity integration is frequently the first “hidden iceberg”. For a Microsoft identity‑rooted design, token validation guidance emphasises signature and issuer validation (e.g., via OpenID discovery metadata), which is easy to get subtly wrong if you skip discovery, mishandle multi‑tenant issuers, or accept tokens for the wrong audience. [28]
Token security is the second iceberg: bearer tokens confer authority to anyone who possesses them, which means accidental logging, leakage, or replay can become an incident. [22] You can mitigate by tightening TTLs/scopes and following JWT best practices, but if you want to go further you’ll need “sender constrained” tokens (DPoP or mTLS-bound tokens), which adds complexity and may be inappropriate for an MVP. [29]
A third risk is over‑engineering the workflow engine. Durable execution platforms are excellent for approval orchestration, retries and long‑running state, but they impose operational overhead (and, if using managed versions, cost floors). [30] A portfolio MVP can mitigate this by (a) implementing a small state machine first (Postgres + transactional outbox), then (b) swapping in a workflow engine only if it clearly adds value.
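A minimal version of that "small state machine first" recommendation is compact enough to show inline. States and events follow the report's three-outcome model; the persistence layer (the Postgres row updated in the same transaction as an outbox write) is omitted, and the class shape is hypothetical:

```python
# Sketch of a small approval state machine, the kind you would back with a
# Postgres row plus a transactional outbox before reaching for a workflow engine.
VALID_TRANSITIONS = {
    ("pending", "approve"): "approved",
    ("pending", "deny"): "denied",
    ("pending", "expire"): "expired",
}

class ApprovalRequest:
    def __init__(self, request_id: str):
        self.request_id = request_id
        self.state = "pending"

    def apply(self, event: str) -> str:
        next_state = VALID_TRANSITIONS.get((self.state, event))
        if next_state is None:
            # Terminal states accept no further events: double-approval or
            # approve-after-expiry is rejected, which keeps issuance idempotent.
            raise ValueError(f"illegal transition {self.state!r} -> {event!r}")
        self.state = next_state
        return self.state
```

The design choice worth noting is that illegal transitions raise rather than no-op: a second "approve" on an already-approved request must never mint a second token.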
Market and positioning risks
The largest market risk is “shadowing”: nearby products already satisfy overlapping needs (JIT role elevation, access requests, dynamic secrets, privileged machine access). [31] If you pitch this as a general IAM/PAM replacement it will look implausible. The correct positioning is narrower: runtime intent brokering for non‑deterministic agent actors, with explicit three‑way outcomes (allow/pending/deny) and downstream enforcement as first‑class deliverables.
Regulatory and compliance risks
If audit logs capture identifiers that can be personal data (names, emails, device identifiers, approver identities), you may fall under GDPR‑like regimes. In the UK context, the Information Commissioner's Office [32] highlights that Article 5 UK GDPR sets out the core data protection principles. [33] The broker’s design should therefore include explicit retention policies, purpose limitation, and minimised logging by default (log what you need to secure and audit, not everything available). [34]
Ethical and safety risks (especially around LLM assistance)
If the “explanation service” uses a hosted LLM, there is a non‑trivial risk of leaking sensitive data (resource names, request details, internal policy logic) to a third party if prompts are not carefully minimised/redacted. OWASP’s Top 10 for LLM applications explicitly calls out risks such as prompt injection and insecure output handling, which are highly relevant to any system that displays model-generated rationale to approvers. [35] [36]
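One concrete mitigation is a redaction pass applied to any text before it leaves the trust boundary. A sketch, with hypothetical patterns for emails and an assumed internal hostname suffix:

```python
import re

# Hypothetical redaction patterns: email addresses and hosts under an
# assumed internal domain suffix. Real deployments would maintain a
# broader, reviewed pattern set.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
INTERNAL_HOST = re.compile(r"\b[\w-]+\.internal\.example\b")

def redact(text: str) -> str:
    """Strip obvious identifiers from prompt text bound for a hosted LLM."""
    return INTERNAL_HOST.sub("[host]", EMAIL.sub("[email]", text))
```

Redaction is lossy by design: the explanation service only needs the shape of the request, not the literal identifiers.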
A second ethical risk is decision theatre: approvers may over‑trust fluent model explanations. The mitigation is to ensure the UI treats deterministic policy match + concrete request facts as primary and labels model output as advisory only, preferably with a kill switch and strict fallbacks (e.g., deterministic templated explanations). [37]
Cost and timeline risks
The idea is plausible as a 4‑6 week part‑time MVP if you keep the simulation lab small and focus on one identity ecosystem. Costs can spike if you adopt managed workflow platforms (minimum monthly tiers) or heavy hosted observability/log retention. [38]
Competitive comparison
The broker’s value proposition overlaps with JIT access requests, privileged identity management, dynamic secrets, and privileged access management. The table below grounds that landscape and clarifies differentiators.
| Project / competitor | Feature set most relevant to this idea | Typical target users | Funding/status | Key differentiators vs proposed broker |
|---|---|---|---|---|
| Proposed JIT Access Broker for AI agents + simulation lab | Runtime intent requests; deterministic allow/pending/deny; approval workflow; short‑lived scoped tokens enforced by protected APIs; replayable audit | Security engineering, IAM, platform teams deploying internal agents | Portfolio/MVP-stage concept | Purpose-built around agent “intent” and tool/API authorisation, not general human login or server access |
| Teleport [39] | JIT access requests with approvals; temporary role/resource access; identity governance around infra/resources; positioning includes humans/machines/agents | Infra/platform/security teams managing access to servers, k8s, DBs, etc. | Commercial open-core product | Strong infra access + governance; not primarily an agent-to-API authorisation broker (unless adapted); the proposed project can be narrower and more “API token issuer + PEP enforcement”-centric [40] |
| Microsoft Entra Privileged Identity Management [41] | Eligible role assignments; on-demand activation; approval and audit history for privileged roles | Enterprises using Microsoft identity governance for admins/users | Commercial cloud service | Human/admin role activation focus; the proposed broker treats workloads/agents as first-class and makes per-call decisions based on intent and resource context [42] |
| HashiCorp Vault [43] | Dynamic secrets with leases/TTLs; renew/revoke lifecycle; secrets engines for credentials; short-lived credentials guidance | Platform/security teams managing secrets & identity for workloads | Part of IBM (acquisition completed Feb 2025) | Excellent at credential issuance/lifecycle; not an approval-centric runtime intent broker by default; proposed broker can integrate a secrets manager but still add policy + approvals + structured agent intent layer [44] |
| CyberArk [45] | JIT access to target machines/endpoints; time-limited elevation/access flows documented; broad PAM suite positioning | IT/security operations, PAM teams | Commercial vendor (NASDAQ: CYBR) | Strong privileged access and endpoint focus; agent-to-API token brokering is not its centre of gravity; proposed broker can be more developer‑platform oriented and easier to demo end‑to‑end [46] |
The competitive implication is that “JIT” itself is not novel; the novelty must be a crisp combination of agent intent, runtime decisioning, approval gating, and downstream enforcement in a cohesive minimal system. [47]
Recommended improvements, roadmap, MVP, validation, and economics
Prioritised improvements to make the concept more rigorous
The following changes produce the largest credibility gain per unit effort:
First, explicitly adopt a standards‑shaped token story. Even if you don’t fully implement OAuth token exchange, your broker-issued tokens should behave like well-formed access tokens: strict audience, short expiry, narrow scopes, and verification per published best practices. [48] If you do want the “token exchange” narrative, RFC 8693 provides a clean framing for exchanging one security token for another (STS behaviour). [49]
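For reference, an RFC 8693 exchange request is just a small set of form-encoded parameters; the audience and scope values below are hypothetical, and the token type URIs are the ones defined by the RFC:

```python
import urllib.parse

def token_exchange_body(subject_token: str, audience: str, scope: str) -> str:
    """Build the form body for an RFC 8693 token exchange request (sketch)."""
    params = {
        # Grant type and token type URIs as defined in RFC 8693.
        "grant_type": "urn:ietf:params:oauth:grant-type:token-exchange",
        "subject_token": subject_token,
        "subject_token_type": "urn:ietf:params:oauth:token-type:jwt",
        # Hypothetical audience/scope for a protected internal API.
        "audience": audience,
        "scope": scope,
    }
    return urllib.parse.urlencode(params)
```

Modelling the broker as an STS endpoint that accepts this body (workload token in, scoped access token out) keeps the issuance path standards-shaped even before full RFC 8693 semantics are implemented.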
Second, treat bearer-token leakage as a first-class threat. Bearer tokens can be used by any holder, so you should design against accidental disclosure (logs, traces, error messages) and consider optional sender-constrained upgrades (DPoP / mTLS-bound tokens) as a v2 hardening path. [50]
Third, make identity verification demonstrably correct. Microsoft identity platform guidance explicitly notes validating signature/issuer via OpenID discovery metadata; build negative tests around wrong issuer/audience/tenant and expired tokens. [28]
Fourth, make audit replay and observability part of MVP, not polish. OpenTelemetry’s spec emphasises normative requirements for compliant telemetry; practically, traces spanning broker → workflow → protected API are the easiest way to make the system debuggable in demos. [51]
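Even before OpenTelemetry is wired in, audit replay hinges on one correlation identifier carried across every lifecycle event for a request. A stdlib-only sketch (the sink interface and field names are hypothetical):

```python
import json
import time
import uuid

def new_audit_logger(sink: list):
    """Create an emitter bound to one correlation id for one broker request.

    Every event written through the emitter carries the same request_id,
    so the trail (request -> decision -> approval -> issuance -> use) can be
    reassembled and replayed from the event store.
    """
    request_id = str(uuid.uuid4())

    def emit(stage: str, **fields) -> str:
        sink.append(json.dumps({
            "request_id": request_id,
            "stage": stage,
            "ts": time.time(),
            **fields,
        }))
        return request_id

    return emit
```

In the real system the sink would be the audit event store and the same identifier would be propagated to the workflow engine and the PEP (e.g., via a header), which is exactly what a tracing context does more formally.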
MVP definition that fits the “concise but rigorous” constraint
A compelling MVP is the smallest system that proves the three outcomes and the enforcement loop:
- One broker API with: identity verification → agent mapping → policy evaluation → decision persistence → token issuance (only on approved) → audit event emission.
- One approval UI with: pending list, request detail, matched policy, approve/deny, reason; no complex delegation trees.
- Two to three protected mock APIs that validate the broker-issued token (issuer/audience/exp/scope) and reject mismatches. [52]
- Simulation lab limited to three agents and three scenarios (auto‑approve, approval‑required, deny), run via Docker Compose.
Validation experiments and decision metrics
You can validate fit without “selling to enterprises” by testing whether the broker improves security and operator experience:
- Policy correctness experiment: generate a corpus of representative requests, run them through the policy engine, and confirm stable outcomes (golden tests). (OPA is widely used as a policy engine with Rego.) [53]
- Abuse-case tests: replay tokens after expiry; attempt wrong audience; attempt scope escalation; ensure denials are explicit and audited. Token validation and claim checks are central to Microsoft identity platform guidance and JWT best practices. [54]
- Approver usability drill: time-to-decision with and without model-generated rationale; measure approval latency and reversal rate. If you use LLM explanations, align mitigations with OWASP LLM risks (prompt injection, insecure output handling). [36]
- Performance benchmark: P95 broker latency for deterministic decisions; approval queue time separated from decision latency. NIST’s framing emphasises per-request decisions while maintaining availability and minimising delays. [1]
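The "golden tests" idea from the first experiment can be made concrete as a frozen corpus replayed against any policy engine implementation; the corpus entries and the `evaluate` signature below are hypothetical:

```python
# Golden-test harness sketch: a frozen corpus of representative requests and
# their expected outcomes, replayed on every policy change to detect drift.
GOLDEN_CORPUS = [
    ({"agent": "report-bot", "action": "read", "resource": "kb/articles"},
     "auto_approve"),
    ({"agent": "ops-bot", "action": "delete", "resource": "db/customers"},
     "deny"),
]

def run_golden(evaluate) -> list:
    """Return the list of (request, expected, got) mismatches; empty means stable."""
    failures = []
    for request, expected in GOLDEN_CORPUS:
        got = evaluate(**request)
        if got != expected:
            failures.append((request, expected, got))
    return failures
```

Keeping the corpus in version control alongside the policies makes every policy change reviewable as a diff in outcomes, which is the cheapest way to demonstrate the "policy correctness" claim in a demo.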
Concrete KPIs that are meaningful for this system:
- Decision latency (P50/P95) for rule-only path
- Approval cycle time (median and tail)
- Auto-approve share vs approval-required share vs deny share
- Token issuance success rate; token validation failure rate at PEP
- Audit completeness: % requests with full lifecycle events (request→decision→(approval)→issuance→use/deny)
- “Policy drift” incidents: cases where approvers override policy intent (should be rare, and analysed)
Indicative budget ranges and timelines
Costs depend heavily on deployment choices. A portfolio MVP can be essentially free locally, then modest for a cloud demo. Public pricing pages show (a) Azure Container Apps includes a free monthly allocation and consumption pricing, (b) managed Postgres and Redis are paid services, and (c) managed workflow engines often have minimum monthly tiers. [55]
| Cost category | Local-only MVP | Small cloud demo (single tenant) | Production-like pilot (single team) |
|---|---|---|---|
| Compute (broker + demo services) | £0–£30/mo | £20–£150/mo | £150–£800/mo |
| Managed Postgres | £0 (local) | £30–£200/mo | £200–£1,000+/mo |
| Managed Redis/cache | £0 (local) | £15–£150/mo | £150–£800+/mo |
| Workflow engine | £0 (DIY state machine) | £0–£200/mo | £100+/mo (managed plans often have minimums) |
| Observability/logging backend | £0–£50/mo | £20–£200/mo | £200–£2,000+/mo |
| Hosted LLM explanations (optional) | £0 | £5–£200/mo | £200–£2,000+/mo |
| Approx. timeline (part-time) | 4–6 weeks | 5–8 weeks | 8–12+ weeks |
Notes on grounding:
- Azure Container Apps explicitly advertises a free tier allocation and consumption pricing model. [56]
- Azure Database for PostgreSQL Flexible Server is a paid managed database with region/plan dependent pricing. [57]
- Azure Cache for Redis has tiered pricing and reserved capacity options. [58]
- Temporal Cloud plan pricing includes minimum monthly pricing by tier (useful as a “cost floor” reference). [59]
- OpenAI API pricing is publicly stated per 1M tokens (model-dependent); Anthropic’s Claude API pricing is similarly documented. [60]
The key economic recommendation for MVP is to keep managed services optional: run Postgres/Redis locally, avoid a managed workflow engine until needed, and delay LLM integration until deterministic foundations and logging redaction are complete. [61]
Alternatives, pivots, and further resources
Alternative approaches if major risks materialise
If differentiation against established JIT/PAM tooling feels weak, or if identity integration becomes too heavy, the most credible pivots preserve the core “runtime intent + enforcement” insight:
A pivot to a portable workload identity broker: replace the single-vendor identity root with SPIFFE/SPIRE (workload identities expressed as SPIFFE IDs; SVIDs delivered via the workload API). SPIFFE explicitly defines workload identity concepts and SPIRE implements attestation + issuing SVIDs in heterogeneous environments. [62] This pivot strengthens multi‑cloud credibility, but it is best reserved for v2 because it adds ecosystem complexity.
A pivot to a standards-centric token exchange service: implement a minimal RFC 8693 token exchange endpoint (token-in → token-out), backed by policy and approvals. This makes the broker feel like a real STS rather than a custom JWT minting service. [12]
A pivot to “policy simulator + audit replay” as the flagship: if building a robust token issuer + PEP enforcement takes too long, you can reduce scope by keeping enforcement in mocked services but doubling down on policy testing, what-if simulation, and audit replay (a realistic pain point for policy-based systems). This still aligns with NIST’s emphasis on per-request decisions and the need to evaluate access requests dynamically. [26]
Suggested resources, tools, and primary sources
For a rigorous build, the most valuable primary/official sources to consult are:
- Zero trust architecture foundations: NIST SP 800‑207 (PDP/PEP model; per-request least privilege). [26]
- Workload identity grounding: Microsoft Entra workload identities overview and workload identity federation overview; plus how OIDC federation works in AKS and similar platforms. [63]
- Token and authorisation standards: OAuth 2.0 (RFC 6749), bearer token usage (RFC 6750), JWT (RFC 7519), JWT best practices (RFC 8725), token exchange (RFC 8693). [64]
- Hardening options (v2): DPoP (RFC 9449) and mutual‑TLS bound tokens (RFC 8705). [65]
- Policy-as-code implementation: OPA docs/repo and Rego learning resources. [66]
- Workflow durability: Temporal workflow execution concepts; pricing/operations considerations if managed. [67]
- Observability: OpenTelemetry specification and signal concepts (traces/metrics/logs correlation). [51]
- AI safety/security framing for any LLM-assisted explanations: OWASP Top 10 for LLM Applications; NIST AI RMF 1.0 (trustworthiness and risk management framing). [68]
- Strategic “why now” references: CISA’s Zero Trust Maturity Model as an implementation planning artefact that many organisations reference when adopting zero trust strategies. [69]
- Regulatory hygiene starter: UK GDPR principles summaries from the Information Commissioner’s Office (useful for logging/retention design). [34]
Finally, for market context, consider scanning (even if paywalled) the latest PAM / identity governance market guides and analyst reports—then use what you can validate through official vendor docs to avoid building on commentary-only claims. The competitive table above already shows the overlap areas you should explicitly address. [31]
[1] [7] [26] [41] [43] Zero Trust Architecture
https://nvlpubs.nist.gov/nistpubs/specialpublications/NIST.SP.800-207.pdf
[2] [14] Microsoft Entra Conditional Access for workload identities
[3] [31] [40] [47] Just-in-Time Access Requests
https://goteleport.com/docs/identity-governance/access-requests/
[4] [16] [53] [66] Open Policy Agent (OPA)
https://github.com/open-policy-agent/opa
[5] [38] [59] Temporal Cloud pricing | Temporal Platform Documentation
https://docs.temporal.io/cloud/pricing
[6] [24] [39] [63] Workload identities - Microsoft Entra Workload ID
[8] [10] [18] [28] [52] [54] Access tokens in the Microsoft identity platform
https://learn.microsoft.com/en-us/entra/identity-platform/access-tokens
[9] [69] Zero Trust Maturity Model
https://www.cisa.gov/zero-trust-maturity-model
[11] IAM roles for service accounts
[12] [49] RFC 8693: OAuth 2.0 Token Exchange
https://www.rfc-editor.org/rfc/rfc8693.html
[13] [65] OAuth 2.0 Demonstrating Proof of Possession (DPoP)
https://www.rfc-editor.org/rfc/rfc9449.html
[15] [30] [67] Temporal Workflow Execution overview
https://docs.temporal.io/workflow-execution
[17] [27] [51] OpenTelemetry Specification 1.55.0
https://opentelemetry.io/docs/specs/otel/
[19] [25] Workload Identity Federation - Microsoft Entra
[20] [22] [48] [50] [61] RFC 6750: The OAuth 2.0 Authorization Framework: Bearer Token Usage
https://www.rfc-editor.org/rfc/rfc6750.html
[21] [64] RFC 6749: The OAuth 2.0 Authorization Framework
https://www.rfc-editor.org/rfc/rfc6749.html
[23] RFC 7519: JSON Web Token (JWT)
https://www.rfc-editor.org/rfc/rfc7519.html
[29] RFC 8725: JSON Web Token Best Current Practices
https://www.rfc-editor.org/rfc/rfc8725.html
[32] [58] Azure Cache for Redis pricing
https://azure.microsoft.com/en-us/pricing/details/cache/
[33] [34] [35] A guide to the data protection principles | ICO
[36] [37] [45] [68] OWASP Top 10 for Large Language Model Applications
https://owasp.org/www-project-top-10-for-large-language-model-applications/
[42] What is Privileged Identity Management? - Microsoft Entra ...
[44] Lease, Renew, and Revoke | Vault
https://developer.hashicorp.com/vault/docs/concepts/lease
[46] Configure Just in Time access to Windows machines
[55] [56] Azure Container Apps
https://azure.microsoft.com/en-us/products/container-apps
[57] Pricing - Azure Database for PostgreSQL Flexible Server
https://azure.microsoft.com/en-us/pricing/details/postgresql/flexible-server/
[60] API Pricing
https://openai.com/api/pricing/
[62] SPIFFE Concepts
https://spiffe.io/docs/latest/spiffe-about/spiffe-concepts/