langflow-ai / langflow

Langflow is a powerful tool for building and deploying AI-powered agents and workflows.


fix: Upgrade cuga to 0.2.20 to resolve playwright dependency conflict (#12703)

* fix: upgrade playwright to 1.58.0 to address Chromium CVEs

- Add playwright>=1.58.0 to override-dependencies in pyproject.toml
- Update uv.lock: playwright 1.49.0 -> 1.58.0, pyee 12.0.0 -> 13.0.1
- Fixes CVE-2026-2313, CVE-2026-2314, CVE-2026-2315, CVE-2026-2319,
  CVE-2026-2321, CVE-2026-2441, CVE-2026-2648, CVE-2026-2649
- Ensures Docker builds download updated Chromium with security patches
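The pin from the first bullet might look like this in pyproject.toml (a sketch of uv's `override-dependencies` table; the exact surrounding contents of the real file are not shown here):

```toml
[tool.uv]
# Force playwright to the patched release even when a transitive
# dependency (cuga, in this case) resolves to an older, vulnerable version.
override-dependencies = [
    "playwright>=1.58.0",
]
```

Unlike a constraint, an override replaces the requirement everywhere in the resolution, which is why it works even though cuga is the one pulling playwright in.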

* fix: update npm to latest version to address brace-expansion CVE-2026-33750

- Add npm update after Node.js installation in Dockerfile
- Fixes CVE-2026-33750 in system npm's brace-expansion dependency
- The system npm shipped brace-expansion 2.0.2; updating pulls in 5.0.5+
- Low-risk change: npm is backward compatible and only affects the CLI tool

* revert: remove npm update from Dockerfile

- npm update attempts were causing CI build failures
- Bundled npm has issues but updating it is proving problematic
- Focus on playwright CVE fix which is the primary concern
- brace-expansion CVE-2026-33750 is lower priority (DoS only)

* chore: sync uv.lock files


* fix(mcp): dedupe edges in connect_components (#12701)

* fix(mcp): make add_connection idempotent to avoid duplicate edges

connect_components used to append a new edge unconditionally. Because
the edge id is deterministic from source/target/handles, calling it for
a pair the flow already had wired up (UI-then-MCP, batch retry, or just
a repeat call) produced a second edge with the same id, double-wiring
the flow at runtime.

Before appending, scan the existing edges for one with the same id and
return that instead. Different outputs/inputs between the same pair
still produce distinct ids and remain supported.

* test(mcp): cover dedupe against UI-saved edges, broaden match key

Older Langflow UIs saved edges with an `xy-edge__` id prefix instead of
the current `reactflow__edge-`, so an id-based dedup would miss the
UI-then-MCP case for any flow that came from an older version. Switch
the existence check to a structural one (source, target, sourceHandle
name, targetHandle fieldName) so the same logical connection dedupes
regardless of id format.

Add a fixture-driven test that loads MemoryChatbotNoLLM.json (an
xy-edge-prefixed flow) and replays each connection through
add_connection, asserting the edge count does not grow.
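A minimal sketch of the structural dedupe described above, assuming edges are plain dicts with `sourceHandle`/`targetHandle` payloads under `data` (an illustration, not Langflow's actual data model):

```python
def _edge_key(edge: dict) -> tuple:
    """Structural identity for an edge. Matching on topology rather than
    the edge id dedupes the same logical connection whether the id uses
    the old `xy-edge__` prefix or the current `reactflow__edge-` one."""
    data = edge.get("data", {})
    return (
        edge.get("source"),
        edge.get("target"),
        data.get("sourceHandle", {}).get("name"),
        data.get("targetHandle", {}).get("fieldName"),
    )


def add_connection(edges: list[dict], new_edge: dict) -> dict:
    """Idempotent append: return the existing edge when the same
    source/target/handle combination is already wired up."""
    key = _edge_key(new_edge)
    for existing in edges:
        if _edge_key(existing) == key:
            return existing
    edges.append(new_edge)
    return new_edge
```

Different outputs or inputs between the same component pair produce different keys, so those connections are still appended as distinct edges.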

* fix(mcp): validate_flow fast-fails and reports partial errors (#12697)

* fix(mcp): validate_flow fast-fails and reports partial errors

validate_flow polled /monitor/builds for up to 30 seconds waiting for
every component to finish before reporting errors. When a component
fails early (for example a missing required field), downstream
components never run, so the loop waited out the full window and
returned just "Build timed out: N/M components completed" with no
actionable context.

- Short-circuit as soon as any completed build reports valid: false;
  return those errors immediately instead of polling on.
- On timeout, include the errors from the builds that did complete
  plus a component_count so the caller can see progress.
- Extract _collect_build_errors so the poll loop and timeout branch
  share the same error shape.

* fix(mcp): stream validate_flow build inline instead of polling

The previous implementation triggered an async build and polled
/monitor/builds, which depended on FastAPI BackgroundTasks firing the
log_vertex_build calls after the trigger request had returned. Under
ASGI test transport these tasks never run, so /monitor/builds stayed
empty and validate_flow timed out with component_count=0.

Switch to event_delivery=direct so the build streams its events back
inside the same request:

- Drive the build via client.stream_post and aggregate per-vertex
  results from end_vertex events.
- Fast-fail on the first vertex with valid=false, since downstream
  vertices depend on it and would not produce useful information.
- Surface top-level error events as a single flow-level error.
- Replace _collect_build_errors with _extract_vertex_error, which
  reads the structured error payload from the end_vertex outputs.

Update the lfx unit tests to use the streaming shape and tighten the
backend integration test to assert real success now that the build
actually runs end to end under ASGI.
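Folding the direct event stream into a result could look like this (event names follow the description above; the payload fields are assumptions, not Langflow's exact event schema):

```python
def summarize_build_stream(events: list[dict]) -> dict:
    """Aggregate per-vertex results from a streamed build, fast-failing
    on the first invalid vertex or top-level error event."""
    vertices: dict = {}
    for event in events:
        kind = event.get("event")
        if kind == "error":
            # Surface a top-level error event as a single flow-level error.
            return {"valid": False, "errors": [event.get("data", {}).get("error")]}
        if kind == "end_vertex":
            data = event.get("data", {})
            vid = data.get("id")
            vertices[vid] = data
            if not data.get("valid", True):
                # Downstream vertices depend on this one; stop here.
                return {
                    "valid": False,
                    "failed_vertex": vid,
                    "errors": [data.get("error")],
                }
    return {"valid": True, "component_count": len(vertices)}
```

Because the events arrive inside the same request, there is no dependence on BackgroundTasks firing after the response, which is what broke under ASGI test transport.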

* fix(graph): make end_all_traces_in_context Python 3.10 compatible

The implementation called asyncio.create_task(coro, context=context),
but the context= keyword was added to create_task in Python 3.11. On
3.10 it raised TypeError. The bug was latent because nothing in the
test suite previously consumed a streaming build response far enough
for Starlette to dispatch the post-response BackgroundTasks where this
code lives. The validate_flow streaming change exposes it.

On 3.10, route the create_task call through context.run so the new
Task copies the captured context as its current context, matching the
isolation the 3.11 path provides via the context= kwarg.

Add a regression test that asserts end_all_traces sees the value of a
contextvar set before the context was captured, even after the caller
mutates that var.
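The version split described above can be sketched as a small helper (the function name is illustrative, not the actual implementation):

```python
import asyncio
import contextvars
import sys


def create_task_in_context(coro, context: contextvars.Context) -> asyncio.Task:
    """Create a Task that runs with `context` as its current context.

    Python 3.11+ supports this directly via the context= kwarg. On 3.10,
    constructing the Task inside context.run() achieves the same isolation,
    because Task construction copies whatever context is current at that
    moment.
    """
    if sys.version_info >= (3, 11):
        return asyncio.create_task(coro, context=context)
    return context.run(asyncio.create_task, coro)
```

Either path gives the Task a snapshot taken at capture time, so contextvar mutations made by the caller after the capture do not leak into the Task.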

* fix: failing wxo list llm test (#12700)

Patch the service layer and update the failing test.

* [autofix.ci] apply automated fixes

* [autofix.ci] apply automated fixes (attempt 2/3)

* Try to fix the missing typer import

---------

Co-authored-by: Janardan S Kavia <janardanskavia@Janardans-MacBook-Pro.local>
Co-authored-by: Adam Aghili <Adam.Aghili@ibm.com>
Co-authored-by: Gabriel Luiz Freitas Almeida <gabriel@logspace.ai>
Co-authored-by: Hamza Rashid <74062092+HzaRashid@users.noreply.github.com>
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
Co-authored-by: Eric Hare <ericrhare@gmail.com>
Janardan Singh Kavia committed
38d142a72360187d077a5e31eae1ce920c84899b
Parent: 07cb95a
Committed by GitHub <noreply@github.com> on 4/14/2026, 9:23:12 PM