
JIT Agentic Broker

0. Portfolio verdict

  • Portfolio strength: Strong
  • Role fit: High
  • One-sentence verdict: This is strongest as a two-part portfolio package: a serious runtime authorization control plane plus a tiny simulation lab that proves the broker works end to end.
  • Biggest risk: Letting the simulation harness bloat into a second product and muddying what the actual portfolio centerpiece is.
  • Best repositioning move: Present the work as one integrated system with two repos: the main Just-in-Time Access Broker for AI Agents, and a deliberately small Agent Access Simulation Lab used only to demonstrate and test the broker.

I. Why this is worth building

AI agents are increasingly being asked to read internal docs, query data, and trigger operational actions, but most access systems still assume either humans or static service accounts. That leaves platform and security teams with a bad choice: overprovision agents with broad long-lived credentials, or block useful automation entirely. A just-in-time access broker solves the real problem by making access decisions at request time, routing risky actions for approval, and issuing short-lived scoped credentials only when needed. Pairing that broker with a small simulation lab makes the project much stronger, because it turns an abstract security idea into a concrete, demoable end-to-end system instead of a policy engine floating in space.

Primary user/customer: Security engineering, IAM, and platform teams deploying internal AI agents

Why this is attractive for AI Engineer and Backend Engineer hiring managers: It shows backend system design, security judgment, workflow orchestration, token-based authorization, failure handling, and disciplined AI usage in explanations and risk summaries rather than fake “AI everywhere” product dressing.

II. Core concept / Differentiator

The core differentiator is that this is not just a proxy in front of traffic. It is a runtime access decision and token provisioning system for AI agents. The broker evaluates each request using the agent identity, requested action, target resource, and visible policy rules, then returns one of three outcomes: auto-approve, require approval, or deny. If approved, it provisions a short-lived scoped token that the protected API must enforce. Agent identity is anchored in Microsoft Entra workload identities, while the broker remains the runtime authorization authority. The simulation lab exists only to generate believable agent traffic and prove that the broker works in a live end-to-end flow. Microsoft defines workload identities as identities assigned to software workloads such as applications, services, scripts, or containers, and positions workload identity federation as a way to access Microsoft Entra-protected resources without managing secrets.
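A minimal sketch of that three-outcome evaluation, assuming an illustrative request shape and policy table (the field names and rule shapes here are not the project's actual schema):

```python
from dataclasses import dataclass
from enum import Enum


class Decision(str, Enum):
    AUTO_APPROVE = "auto_approve"
    REQUIRE_APPROVAL = "require_approval"
    DENY = "deny"


@dataclass(frozen=True)
class AccessRequest:
    agent_id: str     # internal agent profile, already mapped from the Entra identity
    action: str       # e.g. "knowledge.read"
    resource: str     # e.g. "kb/articles"
    sensitivity: str  # "low" | "medium" | "high", taken from the policy catalog


def evaluate(req: AccessRequest, allowed: dict[str, set[str]]) -> Decision:
    """Deterministic three-outcome evaluation: deny unknown agent/action pairs
    (fail closed), escalate anything sensitive, auto-approve known low-risk work."""
    if req.action not in allowed.get(req.agent_id, set()):
        return Decision.DENY              # not explicitly permitted: fail closed
    if req.sensitivity != "low":
        return Decision.REQUIRE_APPROVAL  # permitted but risky: human in the loop
    return Decision.AUTO_APPROVE          # permitted and low-risk: immediate access


policy = {"support-agent": {"knowledge.read"}}
print(evaluate(AccessRequest("support-agent", "knowledge.read", "kb/articles", "low"), policy).value)
# → auto_approve
```

In the real system this decision would live in OPA/Rego rather than Python, but the shape of the outcome set is the same.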

Enabling principle/technology: Just-in-time access, scoped short-lived tokens, policy-as-code, approval workflow orchestration, replayable audit trails, Microsoft Entra Workload ID, LLM-assisted explanation

Why this is not just another generic project: Most portfolio security projects stop at auth or RBAC. This one focuses on runtime access brokering for AI agents, shows visible policy logic, uses Microsoft Entra workload identity as the trusted authentication root, proves downstream token enforcement, and demonstrates all three real decision paths instead of one happy-path-only demo.

III. High-level system design

[Admin / Approver / Demo User] ──(browser)──► [Broker UI + Demo UI]
                                                      │
                                                      ▼
                                      [Just-in-Time Access Broker]
                     ┌────────────────────┬──────────┴─────────┬───────────────────┐
                     ▼                    ▼                    ▼                   ▼
           [Policy + Approval]   [LLM Explain Service]   [Postgres + Redis]   [Token Issuer]

[Agent Simulation Lab + Mock Protected APIs] ──(access requests / token use)──► [Just-in-Time Access Broker]

The Just-in-Time Access Broker is the main backend product. It receives structured access requests from simulated agents, verifies a Microsoft Entra workload identity claim, maps that external workload identity to an internal agent profile, evaluates deterministic policy, decides whether to auto-approve, require human approval, or deny, and issues short-lived scoped broker tokens only for approved requests. Microsoft Entra Workload ID covers applications, service principals, and managed identities, which makes it a credible foundation for non-human identity in this project.

The broker UI is for admins and approvers to manage policies, review pending requests, and inspect audit history, while the demo UI simply triggers prebuilt scenarios. The simulation lab contains tiny HTTP microservices representing agents and protected resource APIs; it is intentionally narrow and exists to exercise the broker, not compete with it.

For AI Engineer fit, the LLM stays out of the final authorization decision and instead generates structured explanations, concise risk summaries, or approval rationale drafts. For Backend Engineer fit, the strongest depth is in request lifecycle design, identity mapping, approval state transitions, idempotent provisioning, token scoping, auditability, fail-closed behavior, and safe operation when the explanation service is unavailable.

The key trade-off is clarity over breadth: a focused control-plane product with one strong ecosystem is much stronger than a fake enterprise IAM suite with ten half-built modules.
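The identity verification and mapping step can be sketched as follows; the issuer URL and mapping entries are placeholders, and real Entra token validation (signature, audience, expiry) is elided:

```python
# Map a verified external workload identity (issuer + subject) to an internal
# agent profile, rejecting anything unknown before policy evaluation runs.
TRUSTED_ISSUER = "https://login.microsoftonline.com/<tenant-id>/v2.0"  # placeholder

# (issuer, subject) -> internal agent profile; would live in config or Postgres
IDENTITY_MAP = {
    (TRUSTED_ISSUER, "support-agent-client-id"): "support-agent",
}


class IdentityError(Exception):
    """Raised on any verification failure; callers must treat this as a deny."""


def map_workload_identity(issuer: str, subject: str) -> str:
    """Return the internal agent profile for a trusted workload identity,
    failing closed on an untrusted issuer or an unmapped subject."""
    if issuer != TRUSTED_ISSUER:
        raise IdentityError("untrusted issuer")           # fail closed
    profile = IDENTITY_MAP.get((issuer, subject))
    if profile is None:
        raise IdentityError("unknown workload identity")  # fail closed
    return profile
```

Keeping this step in front of policy evaluation is what makes "reject unknown identities before policy evaluation" a structural property rather than a convention.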

IV. Functional requirements

• Security admin can register internal agent profiles and bind them to trusted Microsoft Entra workload identities so that agent requests can be evaluated against known trust boundaries
• Security admin can define visible policies for action, resource, sensitivity, and approval requirement so that low-risk requests can auto-approve and risky ones can escalate
• Simulated agent service can authenticate with a Microsoft Entra workload identity and submit a structured access request for one action on one resource so that the broker can evaluate runtime intent
• System can verify workload identity issuer and subject, map them to an internal agent profile, and reject unknown identities before policy evaluation so that only trusted workloads enter the flow
• System can classify each request as approved, pending approval, or denied based on deterministic policy rules with stored reasoning
• Approver can review pending requests with request details, matched policy, external workload identity, and machine-generated explanation so that human approval remains fast and understandable
• Approver can approve or deny a pending request with a reason so that the workflow is attributable and auditable
• System can provision a short-lived scoped credential only after approval or eligible auto-approval so that agents do not hold standing access
• Protected resource API can validate issued tokens and required scope before serving data or executing an action so that the broker’s decision is actually enforced
• Admin can inspect a replayable audit trail of external identity verification, agent mapping, policy match, decision, approval action, token issuance, token use, and final outcome so that every step is traceable
• Demo user can trigger auto-approve, approval-required, and deny scenarios through a small UI so that the end-to-end broker flow can be shown in 3–5 minutes
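The pending/approved/denied requirements above imply a small request-lifecycle state machine; the transition table below is a sketch, and the state names beyond approved/pending/denied are assumptions:

```python
# Legal request-lifecycle transitions implied by the requirements above.
# "denied", "expired", and "provisioned" are terminal states.
TRANSITIONS: dict[str, set[str]] = {
    "submitted": {"approved", "pending_approval", "denied"},  # policy decision
    "pending_approval": {"approved", "denied", "expired"},    # approver action or timeout
    "approved": {"provisioned"},                              # token issued exactly once
}


def transition(state: str, new_state: str) -> str:
    """Apply one lifecycle transition, rejecting anything the table does not allow."""
    if new_state not in TRANSITIONS.get(state, set()):
        raise ValueError(f"illegal transition: {state} -> {new_state}")
    return new_state
```

Making the table explicit keeps Temporal workflow code honest: every state change the workflow attempts must appear here, which is also what makes the audit trail replayable.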

V. Non-functional requirements

  • Rule-only policy decisions should complete within 300–500 ms
  • Workload identity verification and internal agent mapping should happen before policy evaluation and fail closed on verification errors
  • Approval-required requests should return a pending state quickly, without blocking on long-running UI or model work
  • Token issuance must be idempotent so duplicate approvals do not mint multiple active credentials
  • Issued tokens must be short-lived, resource-scoped, audience-bound, and rejected on expiry
  • The system must fail closed by denying provisioning or falling back to deterministic rule-only explanation if the LLM service is unavailable
  • Repeated requests with the same request ID must be handled idempotently
  • All request lifecycle events must be audit logged with request IDs and timestamps
  • The simulation lab should be lightweight enough to run locally via Docker Compose
  • Observability should include broker latency, pending approval counts, denied vs approved counts, token issuance failures, token-use traces, and identity verification failures
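Both idempotency requirements can hang off a set-if-absent key per request ID. A sketch with an in-memory dict standing in for the Redis `SET ... NX EX` pattern the stack implies:

```python
import secrets

# In-memory stand-in for Redis: request_id -> already-issued token.
# The real broker would use `SET jit:token:{request_id} {token} NX EX {ttl}`.
_issued: dict[str, str] = {}


def issue_token_once(request_id: str) -> str:
    """Return the same credential for repeated approvals of one request,
    so a duplicate approval never mints a second active token."""
    if request_id in _issued:
        return _issued[request_id]     # duplicate approval: reuse, do not re-mint
    token = secrets.token_urlsafe(32)  # placeholder for a real scoped, signed JWT
    _issued[request_id] = token
    return token
```

With real Redis the expiry on the idempotency key should match the token TTL, so a replayed approval after expiry legitimately produces a fresh credential.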

VI. Technical stack

  • Frontend: Next.js + TypeScript + Tailwind for the broker admin UI and minimal demo UI
  • Backend: FastAPI + Python for the broker API, token issuance service, and simulation microservices
  • AI / model layer: OpenAI or Anthropic API for structured explanation and risk summary generation only; never used as the final authorization authority
  • Database / storage: PostgreSQL for broker state, approval records, workflow-linked request data, and replayable audit events
  • Cache / ephemeral state: Redis for idempotency keys, short-lived request caching, rate limiting, and lightweight broker-side coordination
  • Authentication & authorization:
    • Microsoft Entra Workload ID as the trusted external workload identity source
    • OPA / Rego as the deterministic policy decision engine for allow, pending-approval, and deny outcomes
    • Internal broker-issued JWTs for downstream protected resource authorization
  • Workflow orchestration: Temporal for durable approval-required flows, retry-safe state transitions, request expiry handling, and audit-friendly workflow execution
  • Infrastructure / deployment: Docker Compose locally; Azure Container Apps for broker, worker, and demo service deployment
  • Monitoring / observability: OpenTelemetry for traces and metrics, Sentry for error monitoring, and structured JSON logs for request lifecycle debugging
  • Testing / demo reliability: Playwright for end-to-end scenario playback across auto-approve, approval-required, and deny flows
  • Other critical pieces: Entra identity-to-agent mapping table in broker config or database; signed token validation in mock protected APIs; minimal metrics dashboard for approval counts, denial counts, token issuance, and workflow health

VII. Main system flow (critical happy path)

  1. A demo user clicks “Run Support Agent: Read KB” in the demo UI.
  2. The Support Agent microservice authenticates using its Microsoft Entra workload identity and receives an identity token or claim set.
  3. The Support Agent creates a structured access request and sends it to the Just-in-Time Access Broker with its workload identity context.
  4. The broker verifies the workload identity issuer and subject, then maps it to the internal support-agent profile.
  5. The broker evaluates the request against visible policy.
  6. The broker auto-approves the request because knowledge.read is low-risk and eligible for immediate access.
  7. The broker issues a short-lived token scoped to the Knowledge API and records the decision.
  8. The Support Agent uses the broker-issued token to call the mock Knowledge API.
  9. The Knowledge API validates the token signature, expiry, audience, and scope.
  10. The Knowledge API returns the requested mock article to the Support Agent.
  11. The demo UI shows the full trace: identity verified, agent mapped, policy matched, token issued, resource call succeeded, and audit event recorded.
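Step 9's checks (signature, expiry, audience, scope) can be sketched with a simplified HMAC-signed token standing in for the broker's real JWTs; the key, claim names, and encoding here are illustrative:

```python
import base64
import hashlib
import hmac
import json
import time

SIGNING_KEY = b"demo-secret"  # placeholder; the real broker would use an asymmetric key


def mint(agent: str, audience: str, scope: str, ttl_s: int = 300) -> str:
    """Simplified broker-side mint: signed claims with expiry, audience, and scope."""
    claims = {"sub": agent, "aud": audience, "scope": scope,
              "exp": int(time.time()) + ttl_s}
    body = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode()
    sig = hmac.new(SIGNING_KEY, body.encode(), hashlib.sha256).hexdigest()
    return f"{body}.{sig}"


def validate(token: str, expected_aud: str, required_scope: str) -> dict:
    """Protected-API-side checks, in the order listed in step 9: signature,
    expiry, audience, scope. Raises on any failure (fail closed)."""
    body, sig = token.rsplit(".", 1)
    expected = hmac.new(SIGNING_KEY, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        raise PermissionError("bad signature")
    claims = json.loads(base64.urlsafe_b64decode(body))
    if claims["exp"] < time.time():
        raise PermissionError("token expired")
    if claims["aud"] != expected_aud:
        raise PermissionError("wrong audience")
    if claims["scope"] != required_scope:
        raise PermissionError("insufficient scope")
    return claims
```

Enforcing all four checks in the mock Knowledge API is what demonstrates that the broker's decision is actually enforced downstream, not just recorded.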

VIII. MVP scope and build plan

Must-have v1 scope:

  • Broker service with internal agent registration, Entra workload identity mapping, policy evaluation, and access request handling
  • Visible deterministic policies with three outcomes: auto-approve, require approval, deny
  • Approval workflow with pending, approved, and denied states
  • Short-lived token issuance with resource-specific scope and expiry
  • Replayable audit log and request detail view
  • Three demo scenarios: Support Agent auto-approve, Reporting Agent approval-required, Ops Agent deny
  • Tiny simulation lab with 2–3 agent services and 2–3 mock protected APIs
  • Minimal demo UI for triggering scenarios and showing traces

Nice-to-have later:

  • LLM-generated explanation and risk summaries
  • Metrics dashboard for approval counts and token events
  • Policy simulator against historical requests
  • Emergency token revocation or kill switch
  • More realistic Entra token verification stub or federated identity demo path

What to cut to avoid overengineering:

  • Full identity governance features
  • Multi-step approval trees
  • Real enterprise SaaS connectors
  • General-purpose agent framework behavior
  • Support for AWS IAM roles or GCP workload identity in v1
  • Kubernetes, Kafka, or other heavy infra not needed for v1
  • More than 3 agent types or more than 3 protected API types

Estimated solo build scope: 4–6 weeks part-time

IX. Assumptions

  • The broker is the main portfolio project and the simulation lab is intentionally subordinate
  • The project uses Microsoft Entra Workload ID as the only external workload identity source in v1
  • The project uses three demo scenarios: auto-approve, approval-required, and deny
  • Simulated agents are lightweight HTTP microservices, not full autonomous agent frameworks
  • Protected resources are mock internal APIs, not real third-party integrations
  • The LLM is used only for assistive explanation and not as the final decision-maker
  • Single-tenant deployment is acceptable for v1