Confluent has announced new Confluent Cloud capabilities that give customers confidence that their data is trustworthy and can be easily processed and securely shared. With Data Quality Rules, an expansion of the Stream Governance suite, organizations can easily resolve data quality issues so data can be relied on for making business-critical decisions. In addition, Confluent’s new Custom Connectors, Stream Sharing, the Kora Engine, and an early access program for managed Apache Flink make it easier for companies to gain insights from their data on one platform, reducing operational burdens and ensuring industry-leading performance.
“Real-time data is the lifeblood of every organization, but it’s extremely challenging to manage data coming from different sources in real time and guarantee that it’s trustworthy,” said Shaun Clowes, Chief Product Officer at Confluent. “As a result, many organizations build a patchwork of solutions plagued with silos and business inefficiencies. Confluent Cloud’s new capabilities fix these issues by providing an easy path to ensuring trusted data can be shared with the right people in the right formats.”
Having high-quality data that can be quickly shared between teams, customers, and partners helps businesses make decisions faster. However, this is a challenge many companies face when dealing with highly distributed open source infrastructure like Apache Kafka. According to Confluent’s new 2023 Data Streaming Report, 72% of IT leaders cite the inconsistent use of integration methods and standards as a challenge or major hurdle to their data streaming infrastructure. Today’s announcement addresses these challenges with the following capabilities:
Data Quality Rules bolsters Confluent’s Stream Governance suite to further ensure trustworthy data

Data contracts are formal agreements between upstream and downstream components around the structure and semantics of data that is in motion. One critical component of enforcing data contracts is rules or policies that ensure data streams are high-quality, fit for consumption, and resilient to schema evolution over time.
To address the need for more comprehensive data contracts, Confluent’s Data Quality Rules, a new Stream Governance feature, enables organizations to deliver trusted, high-quality data streams across the organization using customizable rules that ensure data integrity and compatibility. With Data Quality Rules, schemas stored in Schema Registry can now be augmented with several types of rules so teams can:
- Ensure high data integrity by validating and constraining the values of individual fields within a data stream.
- Quickly resolve data quality issues with customizable follow-up actions on incompatible messages.
- Simplify schema evolution using migration rules to transform messages from one data format to another.
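As a sketch of how such rules can be attached to a schema, the fragment below follows the general shape of Schema Registry’s data-contract rule sets: a domain rule that validates a field with a CEL expression and routes failures to a dead-letter queue, plus a migration rule that uses a JSONata transform during schema upgrades. The field names, rule names, and expressions here are illustrative assumptions; consult the current Schema Registry documentation for exact syntax.

```json
{
  "ruleSet": {
    "domainRules": [
      {
        "name": "validateSsnLength",
        "kind": "CONDITION",
        "type": "CEL",
        "mode": "WRITE",
        "expr": "size(message.ssn) == 9",
        "onFailure": "DLQ"
      }
    ],
    "migrationRules": [
      {
        "name": "renameNameToFullName",
        "kind": "TRANSFORM",
        "type": "JSONATA",
        "mode": "UPGRADE",
        "expr": "$merge([$sift($, function($v, $k) {$k != 'name'}), {'fullName': name}])"
      }
    ]
  }
}
```

The condition rule enforces data integrity at write time, while the migration rule lets consumers on a newer schema version read messages produced under the older one.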
“High levels of data quality and trust improves business outcomes, and this is especially important for data streaming where analytics, decisions, and actions are triggered in real time,” said Stewart Bond, VP of Data Intelligence and Integration Software at IDC. “We found that customer satisfaction benefits the most from high quality data. And, when there is a lack of trust caused by low quality data, operational costs are hit the hardest. Capabilities like Data Quality Rules help organizations ensure data streams can be trusted by validating their integrity and quickly resolving quality issues.”
Custom Connectors enable any Kafka connector to run on Confluent Cloud without infrastructure management

Many organizations have unique data architectures and need to build their own connectors to integrate their homegrown data systems and custom applications with Apache Kafka. However, these custom-built connectors then need to be self-managed, requiring manual provisioning, upgrading, and monitoring, taking away valuable time and resources from other business-critical activities. By expanding Confluent’s Connector ecosystem, Custom Connectors allow teams to:
- Quickly connect to any data system using the team’s own Kafka Connect plugins without code changes.
- Ensure high availability and performance using logs and metrics to monitor the health of the team’s connectors and workers.
- Eliminate the operational burden of provisioning and perpetually managing low-level connector infrastructure.
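Because Custom Connectors run existing Kafka Connect plugins, creating one should look much like configuring any standard connector once the plugin is uploaded. A minimal, hypothetical source-connector configuration might resemble the following; the connector class, connector name, and topic are invented for illustration:

```json
{
  "name": "inhouse-events-source",
  "connector.class": "com.example.connect.InHouseEventSourceConnector",
  "tasks.max": "1",
  "kafka.topic": "inhouse.events"
}
```

The key point is that no code changes are required: the same plugin JAR and configuration keys used with self-managed Kafka Connect carry over, while Confluent Cloud provisions and operates the underlying workers.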
“Providing accurate and current data across the Trimble Platform requires streaming data pipelines that connect our internal services and data systems across the globe,” said Graham Garvin, Product Manager at Trimble. “Custom Connectors will allow us to quickly bridge our in-house event service and Kafka without setting up and managing the underlying connector infrastructure. We will be able to easily upload our custom-built connectors to seamlessly stream data into Confluent and shift our focus to higher-value activities.”
Confluent’s new Custom Connectors are available on AWS in select regions. Support for additional regions and other cloud providers will be available in the future.
Stream Sharing facilitates easy data sharing with enterprise-grade security

No organization exists in isolation. Businesses engaged in activities such as inventory management, deliveries, and financial trading need to constantly exchange real-time data internally and externally across their ecosystem to make informed decisions, build seamless customer experiences, and improve operations. Today, many organizations still rely on flat file transmissions or polling APIs for data exchange, resulting in data delays, security risks, and extra integration complexities. Confluent’s Stream Sharing provides the easiest and safest alternative for sharing streaming data across organizations. Using Stream Sharing, teams can:
- Easily exchange real-time data without delays directly from Confluent to any Kafka client.
- Safely share and protect their data with robust authenticated sharing, access management, and layered encryption controls.
- Trust the quality and compatibility of shared data by enforcing consistent schemas across users, teams, and organizations.
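Since shared streams are consumed “directly from Confluent to any Kafka client,” a recipient would typically point a standard Kafka consumer at the shared cluster using credentials issued with the share. The properties below are a sketch with placeholder values (the bootstrap endpoint, API key, and secret would come from the share invitation):

```properties
# Placeholder values; actual endpoint and credentials are provided with the share.
bootstrap.servers=<BOOTSTRAP_ENDPOINT>:9092
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
  username="<API_KEY>" password="<API_SECRET>";
group.id=shared-stream-reader
auto.offset.reset=earliest
```

Because the shared topic’s schema is enforced through Schema Registry, consumers can deserialize records with the same schema contract the producer registered, rather than reverse-engineering flat-file layouts.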