
What the EU’s “Apply AI” Plan Means for Startups – and How They Should Respond


Introduction

On October 8, 2025 the European Commission unveiled a multi‑billion euro industrial initiative – widely reported as the “Apply AI” plan – aimed at accelerating adoption of AI across healthcare, energy, manufacturing, autos and defense while cutting dependence on non‑European cloud and chip stacks. The program blends R&D, procurement, pilot deployments, and incentives for sovereign compute and software. For founders, investors and product leaders in Europe (and those selling into it), this is a strategic inflection point.

This post breaks down what Apply AI actually changes, who wins and loses, and specific, practical moves startups should make in the next 3–18 months.

What Apply AI changes – the headline effects

  • A shift from pure research grants to large, mission‑driven procurement and deployment budgets. Instead of just funding labs, the EU is explicitly paying for systems to be adopted inside hospitals, factories, energy grids and defense suppliers.
  • A sovereignty and supply‑chain play: money and rules are structured to favor European stacks (software, data platforms, and on‑prem or Europe‑based cloud/compute), and to reduce reliance on U.S. and Chinese providers.
  • Stronger emphasis on regulated verticals where provenance, explainability and local data access matter (healthcare, critical infrastructure, defense). These are areas with higher willingness to pay for certified vendor solutions.
  • Longer procurement cycles but larger contract sizes. Governments and large incumbents buy slowly – but once they buy, deals tend to be strategic and long term.

Together, those effects change incentives across the ecosystem: investors will prize compliance and go‑to‑market playbooks that target public and regulated sector procurement; engineering teams will prioritize deployability and provenance, not just model accuracy.

Why this matters to startups (win conditions)

  • Procurement as a growth channel: Startups that can meet certification, security and data residency requirements can access large, repeatable EU government and enterprise contracts that are otherwise locked to major incumbents.
  • A premium on explainability and provenance: Buyers in regulated sectors will pay more for systems that provide auditable lineage of training data, model versions, and decision logic.
  • Local partnerships matter: Success will depend on alliances with European cloud providers, systems integrators, industrial OEMs and (for defense-oriented tech) national champions.
  • Reduced platform lock‑in wins: Being cloud‑agnostic or offering hybrid deployments (on‑prem + EU cloud) becomes a competitive advantage.

What’s harder (risks and friction)

  • Capital intensity: Building and certifying systems for regulated industries and on‑prem deployments costs more than web or consumer apps.
  • Sales complexity: Expect slow timelines, long procurement processes, and significant customization work for big customers.
  • Competitive response: U.S. and Chinese cloud and AI vendors will still compete aggressively; the EU plan lowers barriers but doesn’t shut others out.
  • Talent and compute constraints: A sovereignty focus may raise demand for EU compute capacity and specialized talent, pushing up local costs.

Concrete steps for startups (3–18 month checklist)

  1. Map target verticals to procurement levers
  • Identify EU funds, regional procurements, and industry pilots that match your product (e.g., hospital networks, grid operators, industrial automation programs).
  • Prioritize a small set of tenders and agencies where you can be genuinely competitive.

  2. Harden compliance and provenance
  • Invest in auditable data lineage, model versioning and toolchains that produce explainability artifacts – buyers will ask for them early (a minimal provenance sketch follows this list).
  • Start SOC 2/GDPR/ISO assessments and document processes for data residency and consent.

  3. Build local compute and cloud partnerships
  • Negotiate integrations or reseller agreements with Europe‑based cloud providers and data centers. Support hybrid deployments.
  • If you rely on large LLM providers, ensure contractual terms permit the deployment patterns European buyers require.

  4. Rework GTM for long sales cycles
  • Staff or contract for public‑sector sales and capture management. Expect longer deals but higher lifetime value.
  • Offer pilot programs that are clearly scoped, time‑boxed, and designed to produce procurement‑grade artifacts.

  5. Signal credibility early
  • Publish whitepapers, compliance summaries, and third‑party audits that matter to government buyers.
  • Get onto industry‑specific frameworks (healthcare certifications, energy operator registries) even in beta.

  6. Consider financing that matches the strategy
  • Investors who understand public procurement and deep tech are better partners than pure growth‑at‑all‑costs VCs. Plan for potentially higher burn to reach deployable product readiness.
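To make step 2 concrete, here is a minimal sketch of the kind of provenance record a procurement reviewer might ask for. The field names and helper are illustrative, not a prescribed schema:

```python
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone


def sha256_of_file(path: str) -> str:
    """Hash an artifact so its identity can be verified later."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()


@dataclass
class ProvenanceRecord:
    model_version: str
    dataset_hash: str      # content hash of the training data snapshot
    weights_hash: str      # content hash of the exported model weights
    training_config: dict  # hyperparameters, code commit, etc.
    created_at: str


def build_record(model_version: str, dataset_path: str,
                 weights_path: str, training_config: dict) -> str:
    record = ProvenanceRecord(
        model_version=model_version,
        dataset_hash=sha256_of_file(dataset_path),
        weights_hash=sha256_of_file(weights_path),
        training_config=training_config,
        created_at=datetime.now(timezone.utc).isoformat(),
    )
    # Emit JSON so auditors and procurement reviewers can consume it directly.
    return json.dumps(asdict(record), indent=2)
```

Even a record this simple answers the first questions a regulated buyer asks: which data trained which weights, under which configuration, and when.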

Implications for non‑European startups and global players

Non‑EU firms should not treat Apply AI as a protectionist wall. Instead:
– Offer EU‑resident deployments or carve out EU data zones; form local partnerships or subsidiaries.
– Focus on interoperability and standards compliance that let your product slot into European vendor stacks.
– Collaborate with local systems integrators to meet procurement rules and cultural expectations.

Large U.S. and Chinese cloud/AI vendors will continue to compete – but the EU plan pressures them to offer European‑resident variants and stronger guarantees around data and provenance.

A realistic timeline

  • 0–6 months: Map procurements, start closing compliance gaps, and pursue pilot conversations with public agencies and large regulated customers.
  • 6–12 months: Execute pilots, get first procurement wins, formalize compute/cloud partnerships and certifications.
  • 12–24 months: Scale deployments, move into multi‑year contracts, and use government references to expand into adjacent EU markets.

Conclusion

Apply AI is not just another grant program – it reorients funding toward real deployments, sovereignty, and regulated sectors. That creates a distinct opportunity for startups that can invest in compliance, provenance, hybrid deployment capability, and the longer sales cycles of public‑sector and industrial customers. The tradeoff is clear: slower, costlier productization up front for stronger, strategic contracts later.

For founders: pick your target vertical, prove a short, tightly scoped pilot that demonstrates auditable outcomes, and lock in local compute and systems partners. For investors: expect different KPIs – longer time‑to‑revenue but potentially larger, stickier contracts.

Key Takeaways
– The EU’s Apply AI program is a multi‑billion euro industrial push prioritizing sovereignty, procurement, and regulated sectors – it favors startups that align to compliance, industry partnerships and on‑prem/cloud neutrality.
– Startups should pursue EU procurement early, build for interoperability and provenance, forge local cloud/compute and defense partnerships, and plan for longer sales cycles but bigger strategic deals.

Browser‑Driving Agentic AI: Why ‘Computer Use’ Changes Enterprise Automation


Introduction

Agentic AI that can actually control a browser – typing into forms, clicking buttons, navigating complex web apps and dragging UI elements – is moving from lab demos to product launches. Recent industry work (notably Google’s “Computer Use” agent and enterprise features from Microsoft and others) shows these models can solve long‑tail, brittle automation problems that earlier API‑only automations struggled with.

This post explains why browser‑driving agents matter, what changes for engineering and security teams, and a practical pilot roadmap for enterprises that want to adopt them responsibly.


What makes browser‑driving agents different?

  • Surface vs. API automation: Traditional automation integrates with stable APIs or uses RPA (recorded flows). Browser‑driving agents operate at the UI surface, letting them automate apps without developer‑facing APIs.
  • Contextual reasoning plus action: These agents combine language understanding with stepwise actions (e.g., identify a field, compute the right input, paste or click), enabling multi‑step workflows that adapt to dynamic pages.
  • Unattended operation: Agents can run long sequences autonomously, orchestrating multiple tabs, services, and other agents – this increases value but also raises safety and monitoring needs.

Why now? Improvements in model grounding, multimodal context, and integrations that let models observe DOM structure or accessibility metadata have made UI actions reliable enough for production trials. Hardware and infrastructure buildout (visible in rising GPU demand) is also reducing latency and cost for running these agents at scale.

Risks and operational challenges

  • Credential and data exposure: Agents need access to logged‑in sessions and sometimes secrets. That expands the blast radius if an agent is compromised or misbehaves.
  • Rights and provenance: Generative outputs that interact with copyrighted content or produce derivative assets raise IP and rights management questions (see recent generative video platform controversies).
  • Drift and brittleness: UIs change. Without strong observability, agent workflows can silently fail or take harmful actions.
  • Unintended actions and safety: Autonomous agents may escalate privileges, submit incorrect transactions, or leak PII if goal specifications are ambiguous.

Engineering and governance patterns that work

1) Least privilege and ephemeral credentials
– Use session‑scoped tokens, short‑lived credentials, and browser sandboxing. Bind agent permissions tightly (read vs. write) and separate browsing-only agents from ones that can submit transactions.
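As a rough illustration of this pattern, the sketch below mints short‑lived, scope‑bound session tokens with nothing but the Python standard library. The signing scheme and names are placeholders for whatever secrets manager and token service you actually run:

```python
import base64
import hashlib
import hmac
import json
import time

SIGNING_KEY = b"rotate-me-regularly"  # in practice, pull from a secrets manager


def mint_agent_token(agent_id: str, scopes: list[str], ttl_seconds: int = 300) -> str:
    """Mint a short-lived, scope-bound token for one agent session."""
    claims = {"agent": agent_id, "scopes": scopes, "exp": time.time() + ttl_seconds}
    payload = base64.urlsafe_b64encode(json.dumps(claims).encode())
    sig = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return f"{payload.decode()}.{sig}"


def check_token(token: str, required_scope: str) -> bool:
    """Reject expired tokens and any action outside the granted scopes."""
    payload, sig = token.rsplit(".", 1)
    expected = hmac.new(SIGNING_KEY, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False
    claims = json.loads(base64.urlsafe_b64decode(payload))
    return time.time() < claims["exp"] and required_scope in claims["scopes"]


# A browsing-only agent gets no "write" scope, so submit actions fail closed.
token = mint_agent_token("browse-agent-1", scopes=["read"])
assert check_token(token, "read") and not check_token(token, "write")
```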

2) Action mediation and human‑in‑the‑loop gates
– For high‑risk operations (financial transfers, publishing), require a human confirmation step. Log suggested actions and provide an approval UI.
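A minimal mediation layer can be as simple as the sketch below. The action names and approval flow are hypothetical, and a real deployment would route approvals through an approval UI rather than stdin:

```python
def submit_transfer(amount: float, to_account: str) -> str:
    return f"transferred {amount} to {to_account}"


HIGH_RISK_ACTIONS = {"submit_transfer", "publish_post"}


def mediate(action_name: str, fn, *args, **kwargs):
    """Run low-risk actions directly; queue high-risk ones for human sign-off."""
    if action_name in HIGH_RISK_ACTIONS:
        print(f"[approval needed] {action_name} args={args} kwargs={kwargs}")
        if input("approve? [y/N] ").strip().lower() != "y":
            return None  # rejected: log the proposal and surface it to the operator
    return fn(*args, **kwargs)


mediate("submit_transfer", submit_transfer, 250.0, to_account="ACME-42")
```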

3) Observability and behavioral contracts
– Record action traces (DOM snapshots, timestamps, model prompts) and establish SLIs for action success rates, latency, and anomalous behaviors.

4) Rights management and watermarking
– Track provenance for content the agent reads and produces. Implement policies that check for protected content before downstream use and surface licensing requirements to decision makers.

5) Test harnesses that simulate UI changes
– Add mutation tests that randomly alter DOM structure in staging to catch brittle selectors or fragile instruction parsing.
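Here is one way to express such a mutation test, assuming Playwright as the staging harness; the form, selectors, and mutation script are illustrative:

```python
from playwright.sync_api import sync_playwright

FORM = "<form><input id='email'><button id='submit'>Go</button></form>"

# Staging-only mutation: rename half the ids so hardcoded selectors can break.
MUTATE = """() => {
  for (const el of document.querySelectorAll('[id]')) {
    if (Math.random() < 0.5) el.id = el.id + '-v2';
  }
}"""

with sync_playwright() as p:
    page = p.chromium.launch().new_page()
    page.set_content(FORM)
    page.evaluate(MUTATE)
    # The agent's selector may now miss; surface that loudly, never silently.
    found = page.query_selector("#email") is not None
    print("selector survived mutation" if found else "BRITTLE: selector broke")
```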

Enterprise adoption roadmap (90‑day pilot)

  • Week 0–2: Inventory
  • Identify 3 candidate workflows: one low‑risk (report generation), one medium‑risk (CRM updates), one high‑value but higher‑risk (order placement).
  • Classify data sensitivity and required permissions.

  • Week 3–6: Build a constrained pilot

  • Implement the low‑risk workflow with strict credential scoping and full activity logging.
  • Add human approval for any write actions.

  • Week 7–10: Hardening and monitoring

  • Add mutation tests, anomaly detectors, and SSO/credential rotation.
  • Define escalation paths and incident reporting templates aligned with regulatory obligations (e.g., EU/California transparency and incident rules).

  • Week 11–12: Evaluation and scale decision

  • Review success metrics (time saved, error rate, security incidents). If green, plan phased rollout with clear SLAs and governance.

Regulatory and policy implications

Agentic UI automation touches several compliance domains: data protection, consumer safety, and IP. Expect regulators to require: documented purpose and scope, incident reporting for serious harms, and transparency about automated actions when interacting with end users. A unified incident and transparency playbook (covering audit trails, reporting templates, and remediation steps) will simplify cross‑jurisdictional compliance.

Practical examples where agents add immediate value

  • Sales ops: Auto‑reconciling leads between ad platforms and CRM when mappings are inconsistent or connectors fail.
  • HR onboarding: Completing multi‑step forms across internal portals that lack a single API.
  • Competitive intelligence: Periodic extraction from complex dashboards that resist API scraping.

When not to use them

  • High‑value financial transfers without redundant human checks.
  • Systems requiring absolute repeatability and certificate‑based authentication where agent tooling cannot meet auditability requirements.

Conclusion

Browser‑driving agentic AI unlocks a new class of automation: adaptive, UI‑level orchestration that can integrate across legacy apps without engineering new APIs. That capability brings big wins in productivity and flexibility but also new security, rights, and compliance responsibilities.

Start small: pilot low‑risk workflows, build tight permissioning and observability, require human approval for high‑risk actions, and prepare incident reporting procedures. With thoughtful engineering and governance, enterprises can harness agentic UI automation safely and effectively.

Key Takeaways
– Browser‑driving agents automate at the UI surface, unlocking workflows that lack stable APIs – but they expand credential exposure, provenance obligations, and drift risk.
– Start with low‑risk pilots: scope credentials tightly, log every action, gate high‑risk operations behind human approval, and test against simulated UI changes.

October 2025: The AI Inflection – Agents, Chips, and the New Geopolitics of Models

How agentic assistants, EU investment, and massive compute commitments reshaped the AI landscape this week


Introduction

The first week of October 2025 felt like an inflection point for applied AI. A cluster of developments – public funding commitments from the EU, new on-screen agent capabilities from Google, corporate moves in India, and high-profile compute-buying whispers and wins – showed that the field is shifting from model innovation to deployment, governance, and industrial strategy.

This post distills the week’s headlines and what they mean for product teams, infrastructure buyers, investors, and policymakers.

Why this week matters

A handful of themes tied the headlines together:

  • Agentic interfaces are becoming real. Google’s Gemini 2.5 “Computer Use” demonstrates models that not only generate text but take actions on-screen (typing, clicking, dragging). That’s a qualitative jump in utility – and risk – because agents now interact with UI state, user data, and third-party services.

  • Public funding + regulation is back. The EU’s new roughly €1 billion push to “apply AI” to health, energy, auto, pharma, manufacturing and defense signals a shift from a purely regulatory posture to active industrial policy. Money plus guardrails will accelerate real deployments inside Europe and change the competitive map.

  • Compute is the choke point. OpenAI’s large commitments (and market chatter about AMD/Nvidia supply deals and xAI’s capital raise) reinforce that whoever controls affordable, scalable accelerators and data-center capacity will shape which companies can train the next generation of frontier models.

  • Content governance is tightening. OpenAI’s Sora launch and rapid policy reversal – moving from opt-out to permission-required for rights-holders – shows creators, rightsholders, and platforms will actively contest how likenesses and copyrighted material are used in generative video and multimodal outputs.

  • Platformization of assistants is messy. Big demos (Booking, Canva, Coursera, Spotify, Zillow) didn’t immediately translate to partner stock moves. The hybrid business model – assistant-as-platform vs. assistant-as-feature – is still settling.

What product and engineering teams should watch

  1. UX & safety: Agents that interact with the screen demand new affordances – clear permissions, undo paths, and bounded-action sandboxes. Design for “explainable actions” (why the assistant clicked/typed) and easy rollbacks.

  2. Access to specialized compute: Expect longer procurement cycles, deeper strategic vendor relationships, and possibly multi-cloud or hybrid strategies to avoid single-vendor lock-in. If your roadmap needs sustained model training or low-latency inference, start capacity conversations now.

  3. Compliance-by-design for generative content: With policies trending toward permission-first approaches for likeness and copyrighted media, build metadata provenance, opt-in flows for training data, and tooling to honor takedowns and licenses programmatically.

  4. Regulatory watch: EU funding programs will come with strings – procurement preferences, data residency, verifiability requirements, and auditability. If you plan to deploy in Europe, map your product and infra choices to likely compliance rules.

  5. Partnering tradeoffs: Integrations showcased at big vendor events create marketing value but not guaranteed revenue. Focus partner work on measurable user outcomes (retention, revenue per user) rather than demos alone.

The investment and competitive angle

Market moves this week indicated real money is following compute bets. AMD’s stock reaction to its reported OpenAI commitments, and rumors of xAI raising capital tied to Nvidia chips, reflect investor attention to vendor capture. That suggests: (a) hardware vendors will play a larger strategic role, and (b) customers should evaluate long-term total cost of ownership (TCO) and supply risk when choosing chip partners.

For startups, this environment favors those that can 1) run efficiently on commodity or mixed hardware, 2) demonstrate clear vertical wins that justify specialized stacks, or 3) partner with cloud/hardware vendors for preferential access.

Conclusion

October’s headlines underscore a shift from raw model invention to applied, agentic systems governed by commerce, regulation, and infrastructure realities. Agents that act on behalf of users will unlock value – and new failure modes – while governments and hardware vendors will shape who can build and scale those systems.

The next 6–12 months will sort out the winners: those that combine safe, auditable agent UIs with resilient compute strategies and strong content governance.

Key Takeaways
– Agentic AI is moving from research demos to on-screen action (typing/clicking), making assistants materially more useful and raising new UX, safety, and platform questions.
– The EU’s €1B ‘Apply AI’ push signals national industrial strategy: public funding plus regulation to close the gap with U.S./China on applied AI in healthcare, energy, auto, pharma and defense.

Browser Agents Are Here: What Google’s ‘Computer Use’ Gemini Means for Enterprise Workflows

How browser-native, on-screen agents change where AI can add real value – and the risks teams must plan for


Introduction

Google’s unveiling of Gemini’s “Computer Use” – an agent that performs tasks by interacting with web pages rather than calling APIs – marks a practical inflection point for agentic AI. Instead of waiting for every app to add a model-backed integration, agents can operate inside browsers and automate multi-step workflows across legacy and modern web apps.

That capability is powerful: it means automation for the billions of enterprise workflows that live only in GUIs. But it also surfaces new security, reliability, and governance challenges that product managers, security teams, and IT leaders must address up front.

This post breaks down where browser-native agents add immediate value, the primary risks teams must mitigate (including recent “CometJacking” concerns), and an actionable checklist for deploying these agents responsibly.

Why browser-native agents matter: the productivity case

APIs and apps are getting smarter, but most enterprise work still happens across web UIs, spreadsheets, and legacy portals. Browser agents unlock value in three broad scenarios:

  • Cross-app orchestration without APIs – e.g., copy data from a legacy booking portal, reconcile in a spreadsheet, and submit a ticket in a modern ITSM tool.
  • Complex form completion and exception handling – agents can handle conditional navigation, field mappings, and retries when forms reject input.
  • Context-aware research and summarization – agents that browse multiple sources, extract relevant snippets, and assemble structured briefings for humans.

Practical characteristics of high-impact use cases:

  • Repetitive, rule-based steps with limited ambiguity
  • Stable UI patterns (pages that don’t change layout every week)
  • Clear success/failure criteria so automation can be monitored

For enterprises, this often translates to back-office tasks (procure-to-pay drudgery), customer ops workflows, and HR onboarding bottlenecks.

The new attack surface: CometJacking and hidden prompt risks

Agentic browsers don’t just introduce convenience – they introduce novel risks. Recent reporting (and a patched incident in an AI browser) highlighted how web content can attempt to manipulate agent behavior through hidden or obfuscated UI elements and prompts.

Key threat types:

  • UI-level prompt injection: malicious pages craft elements that agents interpret as instructions.
  • Hidden-interaction attacks (e.g., ‘CometJacking’ scenarios): pages trigger agent actions by exploiting on-screen controls or overlays.
  • Data exfiltration through chained browsing: agents that fill forms or copy data can be tricked into leaking sensitive fields across domains.

Because these attacks operate at the presentation layer, traditional API-based security controls (rate limits, API keys) aren’t sufficient. Defenses must consider the browser agent’s view and decision model.

Governance and engineering controls

A layered approach works best:

  • Technical controls
  • Sandboxing: run agents in constrained browser contexts with strict domain allowlists (see the sketch after this list).
  • Provenance & auditing: immutable logs of agent actions, inputs, and outputs (who approved, which model, which browser session).
  • Human-in-the-loop gates: require confirmation for high-risk actions (fund transfers, exporting PII).
  • Prompt sanitation & UI validation: filter and validate inputs derived from web pages before acting.

  • Operational controls

  • Supplier risk reviews: evaluate third-party agent providers for transparency on training data, update cadence, and incident response.
  • Use-case gating: pilot on low-risk workflows, measure ROI and failure modes, then expand.
  • Incident playbooks: exercise scenarios where agents misinterpret pages or exfiltrate data.

  • Policy & compliance

  • Data-handling rules: map which fields agents may read/write and how long transient copies persist.
  • Access control: tie agent capabilities to role-based approvals and least privilege.
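As one concrete realization of the sandboxing control above, the sketch below enforces a domain allowlist with Playwright request interception; the host names are hypothetical:

```python
from urllib.parse import urlparse

from playwright.sync_api import sync_playwright

ALLOWED_HOSTS = {"intranet.example.com", "itsm.example.com"}  # hypothetical


def enforce_allowlist(route):
    host = urlparse(route.request.url).hostname or ""
    if host in ALLOWED_HOSTS:
        route.continue_()
    else:
        route.abort()  # blocked: the agent never sees off-list content


with sync_playwright() as p:
    context = p.chromium.launch().new_context()
    # One interceptor covers every page and subresource in the context.
    context.route("**/*", enforce_allowlist)
    page = context.new_page()
    # Any navigation or fetch outside the allowlist is now dropped.
```

Binding the allowlist at the browser context, rather than in the agent's prompt, means a manipulated agent still cannot reach disallowed domains.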

Where agents will (and won’t) win in 2026

Short-term winners:

  • Internal automation teams focused on cost-savings from manual web processes.
  • Customer support triage that extracts case facts from multiple dashboards.
  • Sales ops where CRM, quoting tools, and contract portals lack integrated APIs.

Low-probability wins (for now):

  • High-risk decisions requiring nuanced judgment – these still need humans.
  • Highly volatile UI contexts where frequent front-end updates will break automations faster than they can be maintained.

Gartner and other analysts warn of an agentic AI supply/demand imbalance; a pragmatic posture – small, measurable pilots with tight governance – will separate durable wins from agent-washing.

Practical rollout checklist for product, security, and IT leaders

  1. Start with a 30–60 day pilot on a clearly scoped workflow (measure time saved, error rate).
  2. Implement a browser-level allowlist and sandbox for agent sessions.
  3. Require explicit human approval for actions touching money, PII, or legal documents.
  4. Enable detailed, tamper-evident action logs and regular audits (a hash-chain sketch follows this checklist).
  5. Threat-model the agent’s UI exposure: simulate prompt-injection and overlay attacks.
  6. Build a rollback/kill-switch integrated with your SIEM/incident processes.
  7. Reassess vendor risk and clarify contractual SLAs for model changes and security responsibilities.
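Item 4’s tamper-evident logs can be approximated with a simple hash chain, where every entry commits to its predecessor – a minimal sketch, not a substitute for a proper audit backend:

```python
import hashlib
import json
import time


class ActionLog:
    """Append-only log where each entry commits to the previous one,
    so any retroactive edit breaks the chain."""

    def __init__(self):
        self.entries = []
        self.prev_hash = "0" * 64  # genesis value

    def _digest(self, entry: dict) -> str:
        return hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()

    def append(self, action: dict) -> str:
        entry = {"ts": time.time(), "action": action, "prev": self.prev_hash}
        digest = self._digest(entry)
        self.entries.append((entry, digest))
        self.prev_hash = digest
        return digest

    def verify(self) -> bool:
        prev = "0" * 64
        for entry, digest in self.entries:
            if entry["prev"] != prev or digest != self._digest(entry):
                return False  # chain broken: someone edited history
            prev = digest
        return True


log = ActionLog()
log.append({"type": "click", "selector": "#submit", "url": "https://crm.example.com"})
assert log.verify()
```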

Conclusion

Browser-native agents like Gemini’s Computer Use make a practical promise: automation that reaches workflows APIs never touched. That promise is real – but it brings new, browser-specific risks that teams must address before large-scale adoption.

Treat this era like past platform shifts: pilot conservatively, bake in technical and operational guardrails, and prioritize use cases where predictable, multi-step UI tasks yield clear ROI. Do that, and browser agents can unlock substantial productivity across enterprise workflows – safely.

Key Takeaways
– Browser-native agentic AI can automate long-tail web workflows that lack APIs.
– Agentic browsers introduce new security risks (hidden prompt attacks, UI manipulation) that require browser-aware defenses.
– Prioritize predictable, rule-driven workflows for pilots and combine technical sandboxing with human-in-the-loop controls.

AgentKit and the Rise of Agentic AI: What Developers Need to Know

How OpenAI’s new tooling turns chat models into task-performing agents – products, pipelines, and pitfalls


Introduction

Agentic AI – systems that act on users’ behalf to accomplish multi-step tasks – moved from research demos to mainstream product strategies in 2025. OpenAI’s recent launches, especially AgentKit and the introduction of Apps inside ChatGPT, formalize a path for developers to ship these agent experiences quickly. This post breaks down what AgentKit is, what problems it solves, how teams can use it, and the trade-offs you should plan for.

What is AgentKit (at a glance)?

AgentKit bundles opinionated tools for building, testing, and deploying AI agents. Instead of wiring together models, orchestration, webhooks, and UIs from scratch, AgentKit provides:

  • An agent builder/authoring layer to define goals, steps, and tool integrations.
  • SDKs and runtime components for running agents reliably and at scale.
  • Prebuilt connectors (and patterns) for common tools: calendars, file stores, browsing, enterprise apps.
  • Local testing and simulation features so you can validate behaviors before exposing agents to users.

Combined with ChatGPT’s new Apps model, developers can ship “chat-native” apps that operate as first-class integrations inside conversational surfaces.

Why this matters now

A few trends converged to push agentic tooling forward:

  • Models are better at planning, tool use, and long-form orchestration than a year earlier.
  • Product teams want automation that feels conversational – not just a form with macros.
  • Enterprises need repeatable patterns for safety, logging, and access control when agents touch internal systems.

AgentKit is an attempt to capture those patterns, lowering the friction from prototype to production.

How developers will likely use AgentKit

  1. Define capabilities, not just prompts

Instead of maintaining monolithic prompt templates, teams define agent capabilities (e.g., “book travel”, “submit expense”) and the sequence of tools and checks required. That makes behavior more auditable and modular.
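The exact authoring surface is AgentKit’s, but the underlying idea can be sketched with plain data structures – the fields and example capability below are illustrative, not AgentKit’s actual API:

```python
from dataclasses import dataclass, field


@dataclass
class Capability:
    name: str         # e.g. "submit_expense"
    goal: str         # what "done" means, in plain language
    tools: list[str]  # connectors this capability may call
    checks: list[str] = field(default_factory=list)  # gates before side effects


SUBMIT_EXPENSE = Capability(
    name="submit_expense",
    goal="File an expense report from a receipt and route it for approval",
    tools=["receipt_ocr", "expense_api"],
    checks=["amount_under_limit", "manager_approval_if_over_500"],
)
```

Because each capability declares its tools and checks up front, reviewers can audit what an agent may do without reading every prompt.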

  2. Plug in connectors for real systems

The value in agents is access: calendar APIs, CRMs, payment processors, file stores. AgentKit aims to provide reference connectors and safe patterns for calling them.

  3. Test with simulated users and failover logic

Agents must handle partial failures. Built-in simulation and step-level retry/compensating transactions are essential for reliability.
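A bare-bones version of step-level retry with compensating transactions might look like this sketch, where the steps are stand-ins for real tool calls:

```python
import time


def run_with_compensation(steps):
    """Execute (action, compensate) pairs; on failure, retry once,
    then unwind completed steps in reverse order."""
    done = []
    for action, compensate in steps:
        for attempt in (1, 2):
            try:
                action()
                done.append(compensate)
                break
            except Exception:
                if attempt == 2:
                    for undo in reversed(done):
                        undo()  # roll back earlier side effects
                    raise
                time.sleep(1)  # simple backoff before the retry


steps = [
    (lambda: print("hold seat"), lambda: print("release seat")),
    (lambda: print("charge card"), lambda: print("refund card")),
]
run_with_compensation(steps)
```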

  4. Ship as Apps inside chat interfaces

With ChatGPT Apps, agents can be surfaced inside a conversational UI where users can hand-off tasks and check progress without switching context.

Product implications: UX and business models

  • New UI primitives: “delegate to an agent”, progress timelines, and intervenable automations replace simple one-shot chat replies.
  • Reduced friction for complex tasks could increase conversion for vertical apps (travel, recruiting, HR, procurement) by simplifying multi-step flows.
  • Distribution shifts: chat platforms can become the primary surface for third-party apps – changing how discovery and monetization work.

Safety, privacy and compliance – what to watch

Agentic systems intensify known risks:

  • Data surface expansion: agents access more internal data (calendars, emails, repos). That increases exposure and requires robust access controls, encryption, and audit trails.
  • Confident-but-wrong behavior: agents that act autonomously can amplify hallucinations into real-world actions. Design explicit human-in-the-loop gates for high-impact tasks.
  • Logging and retention: for debugging and compliance, you need detailed logs – but logs themselves are sensitive. Policy and engineering must balance observability with minimization.
  • Regional regulation: depending on where users or data live, agent behavior and data handling may need regional configs (EU AI Act, data residency rules).

Infrastructure and costs

Running agentic experiences often raises compute and latency needs because agents:

  • Perform multiple model calls per task (planning, verification, tool use).
  • May require stateful runtimes to track long-running jobs and user approvals.

Plan for higher inference costs, observability for chain-of-thought and tool calls, and backpressure handling when downstream APIs are slow.
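For the backpressure point in particular, a semaphore that caps in-flight model calls is often enough as a first line of defense – a minimal asyncio sketch, with the inference call stubbed out:

```python
import asyncio

MODEL_CONCURRENCY = asyncio.Semaphore(4)  # cap in-flight model calls


async def call_model(prompt: str) -> str:
    async with MODEL_CONCURRENCY:  # excess requests queue instead of overloading
        await asyncio.sleep(0.1)   # stand-in for a real inference call
        return f"response to: {prompt}"


async def main():
    tasks = [call_model(f"step {i}") for i in range(20)]
    results = await asyncio.gather(*tasks)
    print(len(results), "calls completed without overloading the backend")


asyncio.run(main())
```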

Practical checklist for teams considering AgentKit

  • Start with a narrow, high-value workflow where mistakes are reversible.
  • Instrument every tool call and decision point for auditability.
  • Build explicit confirmation steps for actions that move money or change access.
  • Rate-limit and sandbox connectors during early rollout.
  • Maintain an off-ramp: a clear way for users to opt out and for operators to revoke agent capabilities.

Conclusion

AgentKit and the move to chat-native Apps lower the technical bar for delivering agentic AI, turning prototypes into products faster. That creates exciting possibilities for automation, but also concentrates responsibility: product, security, and infra teams must design for reliability, privacy, and regulatory compliance from day one.

Key Takeaways
– AgentKit lowers the friction for building agentic workflows by packaging orchestration, connectors, and developer UX into an opinionated toolkit.
– Agentic apps promise new product possibilities (chat-native automation, background assistants) but introduce fresh safety, privacy, and infra responsibilities.