Tableflow makes it easier to convert streaming data into Apache Iceberg tables that feed data warehouses, data lakes and analytics engines.
Confluent, a data streaming pioneer, announced new Confluent Cloud capabilities that make it easier for customers to stream, connect, govern and process data for more seamless experiences and timely insights while keeping their data safe.
Confluent Tableflow transforms Apache Kafka topics and their associated schemas into Apache Iceberg tables with a single click to better supply data lakes and data warehouses. Confluent’s fully managed connectors have been enhanced with new secure networking paths and up to 50% lower throughput costs, enabling more complete, secure and cost-effective integrations.
Stream Governance is now enabled by default across all regions with an improved SLA available for Schema Registry, making it easier to safely adjust and share data streams wherever they’re being used.
“The critical problem for modern companies is that operational and analytical estates must be highly connected, but are often built on point-to-point connections across dozens of tools,” said Shaun Clowes, Chief Product Officer at Confluent. “Businesses are left with a spaghetti mess of data that is painful to navigate and starves the business of real-time insights.”
Many organisations turn to Kafka as the standard for data streaming in the operational estate, and to Iceberg as the standard open table format for data sets in the analytical estate. Using Iceberg, companies can share data across teams and platforms while keeping tables updated as the data itself evolves.
“Open standards such as Apache Kafka and Apache Iceberg are popular choices for streaming data and managing data in tables for analytics engines,” said Stewart Bond, Vice President of Data Intelligence and Integration Software at IDC. “However, there are still challenges for integrating real-time data across operational databases and analytics engines. Organisations should look for a solution that unifies the operational and analytical divide and manages the complexity of migrations, data formats and schemas.”
Tableflow makes it easier to feed data warehouses and data lakes for analytics
Tableflow, a new feature on Confluent Cloud, turns topics and schemas into Iceberg tables in one click to feed any data warehouse, data lake or analytics engine for real-time or batch processing use cases. Tableflow works together with the existing capabilities of Confluent’s data streaming platform, including Stream Governance features and stream processing with Apache Flink, to unify the operational and analytical landscape.
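As a rough illustration of how a Tableflow-produced table might be consumed, the PySpark sketch below reads an Iceberg table through an Iceberg REST catalog. The catalog endpoint and the table name (lake.sales.orders) are hypothetical placeholders rather than Confluent-specific names, and the exact catalog configuration will depend on your environment.

```python
# A minimal sketch, assuming an Iceberg REST catalog endpoint and a placeholder
# table ("lake.sales.orders"); this is not a Confluent-specific API. It shows how
# any Iceberg-compatible engine (here, PySpark) could query a table that is kept
# up to date from a Kafka topic.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("read-iceberg-table")
    # The Iceberg Spark runtime version must match your Spark/Scala build.
    .config("spark.jars.packages",
            "org.apache.iceberg:iceberg-spark-runtime-3.5_2.12:1.5.0")
    .config("spark.sql.catalog.lake", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.lake.type", "rest")
    .config("spark.sql.catalog.lake.uri", "https://<catalog-endpoint>/iceberg")
    .getOrCreate()
)

# Query the table like any other; Iceberg resolves the current snapshot and schema.
spark.sql("SELECT order_id, amount, updated_at FROM lake.sales.orders LIMIT 10").show()
```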
Using Tableflow, customers can:
● Make Kafka topics, along with any associated schemas, available as Iceberg tables in a single click
● Keep Iceberg tables fresh and continuously updated with the latest streaming data from enterprise and source systems
● Deliver high-quality data products by harnessing the data streaming platform, with Stream Governance and serverless Flink, to clean, process or enrich data in-stream so that only trustworthy, well-governed data lands in the data lake (see the sketch after this list)
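As a rough sketch of the in-stream cleaning idea above, the PyFlink snippet below filters malformed records out of a raw Kafka topic before they reach the topic that feeds the lake. The topic names, fields and broker address are hypothetical placeholders; on Confluent Cloud the same logic would typically be expressed as a Flink SQL statement.

```python
# A minimal PyFlink sketch of in-stream cleaning, assuming hypothetical topics
# ("orders_raw", "orders_clean"), fields and broker address. Requires the
# flink-sql-connector-kafka jar on the classpath.
from pyflink.table import EnvironmentSettings, TableEnvironment

t_env = TableEnvironment.create(EnvironmentSettings.in_streaming_mode())

# Source: the raw Kafka topic.
t_env.execute_sql("""
    CREATE TABLE orders_raw (
        order_id STRING,
        amount   DOUBLE,
        currency STRING
    ) WITH (
        'connector' = 'kafka',
        'topic' = 'orders_raw',
        'properties.bootstrap.servers' = '<broker>:9092',
        'scan.startup.mode' = 'earliest-offset',
        'format' = 'json'
    )
""")

# Sink: the cleaned topic that downstream tables and lakes consume.
t_env.execute_sql("""
    CREATE TABLE orders_clean (
        order_id STRING,
        amount   DOUBLE,
        currency STRING
    ) WITH (
        'connector' = 'kafka',
        'topic' = 'orders_clean',
        'properties.bootstrap.servers' = '<broker>:9092',
        'format' = 'json'
    )
""")

# Keep only well-formed, positive-amount orders in the cleaned stream.
t_env.execute_sql("""
    INSERT INTO orders_clean
    SELECT order_id, amount, currency
    FROM orders_raw
    WHERE order_id IS NOT NULL AND amount > 0
""")
```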
Tableflow is currently available as part of an early access program and will soon be available for all Confluent Cloud customers.