Qlik adds AI tools for data engineering pipeline work
Qlik has added new data engineering tools as part of its agentic execution strategy, extending its AI-focused approach beyond analytics into pipeline and data delivery.
The release includes declarative pipelines, real-time routing in Talend Studio, and native streaming in Qlik's Open Lakehouse. The changes are intended to reduce manual work for data teams and help them deliver more current data for analytics, automation, and AI systems.
Data engineering is becoming a pressure point for companies trying to support more AI projects without adding complexity or cost. In many organisations, engineers still spend significant time building pipelines, maintaining transformations, fixing issues, and keeping datasets current enough for operational use.
Qlik aims to address that with what it describes as a more intent-driven workflow. Its new declarative pipelines let engineers create and update pipelines through natural-language interactions within the pipeline canvas, while receiving guidance on next steps.
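Qlik has not published the underlying format, but the declarative idea itself is simple to illustrate: the engineer states what each step should do, and a generic runner decides how to do it. The toy spec and runner below are purely hypothetical, not Qlik's actual pipeline canvas.

```python
# Illustrative only: a toy declarative pipeline, not Qlik's format.
# Steps are data ("what"), and a generic runner supplies the "how".

PIPELINE_SPEC = [
    {"step": "filter", "column": "status", "equals": "active"},
    {"step": "rename", "from": "cust_id", "to": "customer_id"},
]

def run_pipeline(rows, spec):
    """Apply each declared step, in order, to a list of row dicts."""
    for step in spec:
        if step["step"] == "filter":
            rows = [r for r in rows if r.get(step["column"]) == step["equals"]]
        elif step["step"] == "rename":
            rows = [{step["to"] if k == step["from"] else k: v
                     for k, v in r.items()} for r in rows]
    return rows

rows = [
    {"cust_id": 1, "status": "active"},
    {"cust_id": 2, "status": "churned"},
]
print(run_pipeline(rows, PIPELINE_SPEC))
# → [{'customer_id': 1, 'status': 'active'}]
```

Because the spec is plain data rather than code, a natural-language interface only has to emit or edit the spec, which is what makes this style of pipeline a natural fit for AI-assisted authoring.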
Another addition brings real-time message routing into Talend Studio for agentic data flows. This is intended to enable engineers to work with large language models, build retrieval-augmented generation pipelines tailored to specific domains, and connect agentic systems via Model Context Protocol (MCP) components.
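The core of any retrieval-augmented generation pipeline is the same regardless of vendor: fetch domain documents relevant to a question, then feed them to the model as context. The sketch below is a minimal, self-contained illustration of that flow, with naive keyword overlap standing in for a real vector store and the assembled prompt standing in for an actual LLM call; none of it reflects Qlik or Talend APIs.

```python
# Illustrative RAG sketch: keyword-overlap retrieval + prompt assembly.
# A production pipeline would use embeddings and a real LLM instead.

DOMAIN_DOCS = [
    "Invoices are archived after 90 days in the finance system.",
    "Customer records are mastered in the CRM, not the warehouse.",
]

def retrieve(question, docs, k=1):
    """Rank documents by how many words they share with the question."""
    q_words = set(question.lower().split())
    scored = sorted(docs,
                    key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(question, docs):
    """Assemble the augmented prompt an LLM would receive."""
    context = "\n".join(retrieve(question, docs))
    return f"Context:\n{context}\n\nQuestion: {question}"

print(build_prompt("Where are customer records mastered?", DOMAIN_DOCS))
```

Tailoring such a pipeline to a domain mostly means curating `DOMAIN_DOCS` and the retrieval step, which is why vendors position it as data engineering work rather than model work.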
Open Lakehouse Streaming is another part of the release. It adds native streaming support to Qlik's Open Lakehouse, allowing teams to combine continuous event data with batch and change data capture workloads within a single environment, rather than relying on separate tools.
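Combining batch and change data capture in one place boils down to replaying an ordered stream of change events on top of a batch snapshot. The sketch below shows that mechanic in miniature; the table shape, event format, and `apply_cdc` helper are all hypothetical and say nothing about how Qlik's Open Lakehouse implements it.

```python
# Illustrative sketch (not Qlik's implementation): replaying CDC events
# over a batch snapshot so a single table reflects both workloads.

batch_snapshot = {
    101: {"name": "Ada", "plan": "basic"},
    102: {"name": "Grace", "plan": "pro"},
}

cdc_events = [
    {"op": "update", "key": 101, "row": {"name": "Ada", "plan": "pro"}},
    {"op": "delete", "key": 102},
    {"op": "insert", "key": 103, "row": {"name": "Edsger", "plan": "basic"}},
]

def apply_cdc(table, events):
    """Replay ordered insert/update/delete events on top of batch state."""
    table = dict(table)  # leave the original snapshot untouched
    for e in events:
        if e["op"] in ("insert", "update"):
            table[e["key"]] = e["row"]
        elif e["op"] == "delete":
            table.pop(e["key"], None)
    return table

print(apply_cdc(batch_snapshot, cdc_events))
# → {101: {'name': 'Ada', 'plan': 'pro'}, 103: {'name': 'Edsger', 'plan': 'basic'}}
```

Doing this natively in one environment, rather than shuttling events between a streaming tool and a batch tool, is the integration the release is pitching.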
Qlik also outlined an AI Assistant for Talend Studio, planned for later this year. The assistant is intended to help developers request help, generate jobs, create documentation, and write SQL using natural language inside the integrated development environment.
Engineering Shift
The announcement reflects a broader shift in how software suppliers are positioning AI tools for enterprise data teams. Rather than limiting AI features to coding prompts or analytics queries, vendors are increasingly targeting the operational steps behind data preparation, governance, and delivery.
Qlik presented the release as part of that move. The aim, it said, is not only to help write code but also to reduce friction in building, modifying, and operating pipelines in production environments where reliability and control remain critical.
"Most companies do not struggle to imagine AI use cases. They struggle to deliver the trusted, current data those use cases depend on," said Mike Capone, Chief Executive Officer, Qlik. "As demand rises, data engineering becomes the critical path. Qlik is helping teams reduce friction, protect trust, and keep pace with the business."
The combined changes are meant to support teams facing rising demand for AI-ready data while avoiding larger backlogs and repeated engineering work. Qlik also pointed to metadata, stewardship, and data quality as connected parts of the same process rather than separate tasks.
Customer View
The announcement also included feedback from a practitioner, focusing on the distinction between coding assistance and support for the broader engineering workflow.
"There is a big difference between an assistant that helps write code and a system that actually helps a data team move faster end to end," said Robin Astle, Principal Developer, Valpak. "The interesting part of this announcement is the focus on pipeline creation, data quality, metadata, and stewardship together, because that is much closer to how real engineering work happens."
Qlik did not provide pricing. It said the newly available functions, together with the planned additions, are intended to move data engineering away from manual pipeline assembly and toward a more agent-assisted operating model.
The company says it is used by 75% of the Fortune 500.