Current 2023 Announcements

Blog Summary: (AI Summaries by Summarizes)
  • Confluent announced the addition of serverless Flink, expanding their moats to three, alongside replication and Confluent Cloud.
  • Confluent's marketing tends to oversimplify complex topics like the universal use of Kafka and streaming.
  • Kafka is not universally suitable, requiring a clear need for streaming over batch processing.
  • While Confluent Cloud simplifies Kafka operations, it doesn't address architectural or development challenges.
  • Flink is positioned as a universal solution, but its value proposition as a serverless service may not be clear to all organizations.

Confluent held their Current conference (Videos: day one and day two). There were many announcements that both technologists and investors need to know about.

Confluent had two moats (replication and Confluent Cloud), and now they anticipate three moats (replication, Confluent Cloud, serverless Flink).

As expected with a vendor conference, there is a lot of marketing from the stage. Confluent’s marketing likes to oversimplify things that have more nuance. A few examples:

  • Everything should use Kafka – No, it shouldn’t. I’ve worked with clients who bought into this messaging and ran into problems. Kafka isn’t for everyone and everything.
  • Everything should be streaming – No, it shouldn’t. Batch processing has been around far too long to say it isn’t necessary. There should be a clear and compelling need for streaming to justify its various downsides.
  • Kafka is easy – It’s not that simple. Confluent Cloud makes Kafka operations easier but doesn’t change the difficulty for architecture or development. It doesn’t fix organizational problems.
  • Flink for everything – It’s not that simple. See below.

In a previous post, I made some predictions about Confluent’s changes in positioning. For Confluent, this is a significant shift. The talks went from ksqlDB and Kafka Streams to Flink. I remember ksqlDB getting mentioned once or twice during the keynotes. I think the writing on the wall is clear for ksqlDB and remains unclear for Kafka Streams.

There was a heavy focus on Kafka solving all data quality problems. Data quality problems are solved by more than just technology. In my experience, the root cause is organizational and manifests technically. Putting Kafka in place will let the data quality issues continue or manifest differently.

Hearing the focus on data products in the various keynotes was good. I think this is one of the keys to our industry creating more value. However, data products aren’t just exposed on Kafka, as the keynotes tried to make it seem. Kafka is a way to move data around, but that is just one part of a data product. Data products need to be exposed with the right technologies for the various use cases.

Roadmap

A great deal of the keynotes covered Confluent and Kafka’s roadmap.

So much of Kafka’s roadmap is a copy of the features already available in Pulsar. Tiered storage? Already in Pulsar. Queues? Already in Pulsar. Directories? Already in Pulsar. It’s still surprising to see companies waiting on features that are already available and production-ready in Pulsar.

As we look at these changes, I always ask myself, “Can Kafka handle these changes?” In another post, I introduced a concept from Justin Coffey about how much systems can be changed from their original design. In Kafka’s case, many of these changes were never considered. Can Kafka’s codebase and design change this much without other issues popping up? Clever designs can only cover up so many problems.

As I look over these changes, I also ask, “Why now? These features have been missing since the beginning. What has changed?” I don’t have an answer yet. It could be something to do with engineering resources. In 2022, Confluent spent $264 million on research and development. I’ve found Confluent’s output relative to R&D spending to be low.

I found the keynote coverage of Kafka’s protocol interesting. First, it’s rare to hear a binary protocol discussed on stage. Second, in my opinion, Kafka’s protocol will live longer than Kafka itself. We already see other vendors implementing the protocol, such as Kafka on Pulsar, Redpanda, etc. These technologies speak Kafka’s binary protocol, removing the need for any client-side changes. I wondered if the protocol changes will be used as a tool to keep others out. I’ve seen other open source companies use strategies like this to make it more difficult for their competitors to re-implement or keep up with the line protocol changes. The sales team then uses this to their advantage when customers ask about competitors.
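
To make that concrete, a standard Kafka client can point at any broker that speaks the protocol by changing only its bootstrap address. A minimal sketch, with “redpanda:9092” as a placeholder address and illustrative topic and message values:

```java
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class ProtocolCompatibilitySketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        // The only change needed to target Redpanda, Kafka on Pulsar, or any
        // other implementation of Kafka's binary protocol is this address.
        props.put("bootstrap.servers", "redpanda:9092");
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        // Identical producer code runs unchanged against any
        // protocol-compatible broker.
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("events", "key", "value"));
        }
    }
}
```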

Speaking of competition, some announcements didn’t specifically mention how open or closed the codebase is. For example, is the data quality functionality open source or closed source? Teams must know this, or you’ll have vendor lock-in on an ostensibly open-source system.

Flink

The most significant change this year was Confluent’s acquisition of Immerok. In their filing, Confluent bought the company for $54.9 million in cash, with another $52.3 million in cash for key employee retention.

The keynote marketing was clear, “Use Flink for everything.” While Flink is a great technology, it isn’t for every use case. There is an inherent overhead to using it. Confluent announced a managed Flink offering. Like all managed services, it makes operations easier but doesn’t change the programmatic or architectural difficulties. You’ll need a compelling need for all of the power Flink gives you.

They mentioned using Flink for batch. Flink’s batch processing is improving, but there are some caveats to know.
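
To make that concrete, open source Flink’s DataStream API (not Confluent’s managed offering, which wasn’t shown in this detail) switches a pipeline from streaming to batch execution with one setting, provided all sources are bounded. A minimal sketch:

```java
import org.apache.flink.api.common.RuntimeExecutionMode;
import org.apache.flink.api.common.functions.MapFunction;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class FlinkBatchSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();
        // One flag moves the same pipeline from streaming to batch execution.
        // Batch mode requires bounded sources, like fromElements below.
        env.setRuntimeMode(RuntimeExecutionMode.BATCH);

        env.fromElements("a", "b", "c")
           .map(new MapFunction<String, String>() {
               @Override
               public String map(String value) {
                   return value.toUpperCase();
               }
           })
           .print();

        env.execute("batch-mode-sketch");
    }
}
```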

Flink is an excellent piece of technology. It seems like Confluent is positioning its value proposition on its Flink service being serverless. (A big note that wasn’t clear: their Flink offering is cloud-only.) Depending on the size of the organization, this can make a difference. A smaller organization doesn’t need an entire cluster, and running one would add extra costs. I don’t think larger organizations will see cost savings or operational differences compared to other Flink offerings. To be clear, there are other managed Flink services out there.

Queues

Queues are often used alongside pub/sub systems like Kafka. As of now, Kafka makes for a terrible queuing system. In fact, I winced several times during the keynote as people talked about using Kafka as a queue. It’s so error-prone and requires such heavy workarounds that it isn’t viable.
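
To show what those workarounds look like, here’s a minimal sketch of the common client-side approximation of a queue: one record per poll, with a manual offset commit standing in for a per-message acknowledgment. The topic and group names are illustrative.

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class QueueWorkaroundSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "work-queue");
        props.put("enable.auto.commit", "false");
        // Force one record per poll to approximate per-message acks.
        props.put("max.poll.records", "1");
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("tasks"));
            while (true) {
                ConsumerRecords<String, String> records =
                        consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    process(record);
                    // The offset commit stands in for an ack. A crash between
                    // process() and commitSync() redelivers the work, and
                    // parallelism is still capped by the partition count.
                    consumer.commitSync();
                }
            }
        }
    }

    static void process(ConsumerRecord<String, String> record) {
        System.out.println(record.value());
    }
}
```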

Confluent has realized this and created KIP-932 to add queue support to Kafka. I’ve read through the proposal, which requires client and broker code changes. I’ve seen other limitations in Kafka worked around with client-side hacks, and I initially worried they’d try to handle this one on the client side too, which wouldn’t work.

The big questions in my mind remain. How well can a system that wasn’t designed for queues handle them once added? Will there be performance issues or weird limitations? These aren’t trivial questions, as queues and the work done in them are essential. Missing an item of work due to a bug could cause significant problems. I wouldn’t want to run production use cases on code that didn’t have a lot of miles on it.

An essential part of any queue is error handling, specifically dead letter queues. Kafka doesn’t have this feature built in. I didn’t see specifics in the KIP about handling that.
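
In the meantime, teams hand-roll dead letter queues. A minimal sketch of that pattern, where the “.dlq” topic suffix and error header are my own conventions rather than anything from the KIP:

```java
import java.nio.charset.StandardCharsets;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class DeadLetterSketch {
    private final KafkaProducer<String, String> producer;

    DeadLetterSketch(KafkaProducer<String, String> producer) {
        this.producer = producer;
    }

    void handle(ConsumerRecord<String, String> record) {
        try {
            process(record);
        } catch (Exception e) {
            // Forward the failed record, plus error metadata, to a parallel
            // dead letter topic so the main consumer can keep moving.
            ProducerRecord<String, String> dead = new ProducerRecord<>(
                    record.topic() + ".dlq", record.key(), record.value());
            dead.headers().add("error.message",
                    String.valueOf(e.getMessage()).getBytes(StandardCharsets.UTF_8));
            producer.send(dead);
        }
    }

    void process(ConsumerRecord<String, String> record) throws Exception {
        // Business logic goes here.
    }
}
```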

Future Directions

The second day ended with a consideration of Kafka’s future directions.

The first was directories. It’s been a day-one annoyance that Kafka has a single namespace where every topic lives in the same place. As topics become more numerous, keeping track of them becomes a problem. Namespaces are a feature that’s been in Pulsar for a while.

They want to remove the need to know about or deal with partitions in Kafka. Removing partitions has been a desire in other pub/subs too. It will be difficult, as so much of Kafka is partition-based. I’ve always taught that people should think of the key as the smallest unit of order, instead of the technically correct partition-based thinking. I’ve seen many designs that weren’t doing that and instead designed around partition-based ordering and grouping.
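
A minimal sketch of that teaching point: records sharing a key always land on the same partition, so they stay ordered relative to each other, and the design never has to mention partitions. The topic, key, and values are illustrative.

```java
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class KeyOrderingSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // All records keyed by "customer-42" hash to the same partition,
            // so they arrive in order relative to each other. Thinking in
            // keys keeps the design valid even if partitions ever go away.
            producer.send(new ProducerRecord<>("orders", "customer-42", "created"));
            producer.send(new ProducerRecord<>("orders", "customer-42", "paid"));
            producer.send(new ProducerRecord<>("orders", "customer-42", "shipped"));
        }
    }
}
```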

The discussion rounded out with KIP-939, which would let Kafka participate in a two-phase commit. As dull as this sounds, it addresses a massive weakness in Kafka’s transactions. When saving data to another data store, Kafka’s transactions don’t know about or allow for knowledge of the other system’s success. This lack of coordination can cause one system to commit the transaction while the other fails. Data becomes out of sync, and bad things happen. Implementing this KIP will allow systems to tightly integrate their transaction commits.
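
A minimal sketch of that dual-write gap, assuming a transactional producer (initTransactions() already called) and a JDBC connection; the table name is illustrative. A crash between the two commits below leaves one system committed and the other rolled back, which is the window KIP-939 aims to close.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class DualWriteSketch {
    void writeBoth(KafkaProducer<String, String> producer,
                   Connection db, String payload) throws Exception {
        producer.beginTransaction();
        db.setAutoCommit(false);
        try {
            producer.send(new ProducerRecord<>("events", payload));
            try (PreparedStatement stmt = db.prepareStatement(
                    "INSERT INTO events (payload) VALUES (?)")) {
                stmt.setString(1, payload);
                stmt.executeUpdate();
            }
            db.commit();                  // the database is now committed
            producer.commitTransaction(); // a crash before this line strands
                                          // the database write with no
                                          // matching Kafka record
        } catch (Exception e) {
            producer.abortTransaction();
            db.rollback();
            throw e;
        }
    }
}
```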

Convertible Senior Notes

It wasn’t covered at Current, but there is a part of Confluent’s filings that customers and investors should know about. The Q2 10-Q filing discusses how, in December 2021, Confluent issued $1.1 billion of convertible senior notes at a 0% interest rate, due in 2027.

I haven’t seen any coverage of this in other articles. These notes are a crucial consideration, as stated in the filing:

We have funded our operations since inception primarily through equity and debt financings and sales of our offering. […] Additional financing may not be available on terms favorable to us, if at all, particularly during times of market volatility and general economic instability. […] If we incur additional debt, the debt holders, together with holders of our outstanding convertible notes, would have rights senior to holders of common stock to make claims on our assets, and the terms of any future debt could restrict our operations, including our ability to pay dividends on our common stock. Furthermore, if we issue additional equity securities, including through future issuances of equity-linked or derivative securities, our existing stockholders could experience further dilution, and the new equity securities could have rights senior to those of our common stock. Because our decision to issue securities in the future will depend on numerous considerations, including factors beyond our control, we cannot predict or estimate the amount, timing, or nature of any future issuances of debt or equity securities. As a result, our stockholders bear the risk of future issuances of debt or equity securities reducing the value of our Class A common stock and diluting their interests.

Frequently Asked Questions (AI FAQ by Summarizes)

What were the significant announcements made by Confluent at their recent conference?

Confluent announced a managed serverless Flink offering, queue support via KIP-932, and several Kafka roadmap items, all of which are important for both technologists and investors.

How many moats does Confluent anticipate having with the addition of serverless Flink?

Confluent had two moats before (replication and Confluent Cloud) and now anticipates three moats with the addition of serverless Flink.

Is Kafka suitable for every use case?

Kafka is not suitable for every use case, and there should be a clear need for streaming over batch processing.

What does Confluent Cloud address in terms of Kafka operations?

While Confluent Cloud makes Kafka operations easier, it doesn't address architectural or development difficulties.

What were the significant developments at the conference related to Confluent's acquisition and the use of Flink for batch processing?

Confluent's acquisition of Immerok and the emphasis on using Flink for batch processing were significant developments at the conference.

What were the key points regarding Confluent's recent financial activities?

Confluent issued $1.1 billion convertible senior notes at a 0% interest rate due in 2027 and has primarily funded its operations through equity and debt financing.

What risks do stockholders face in relation to future debt or equity issuances by Confluent?

Stockholders bear the risk of future issuances reducing the value of Class A common stock, and decisions on future securities issuances depend on numerous considerations beyond Confluent's control.
