This event is limited to the first 100 data engineers to sign up. Don't wait!
March 26, 2026
Register now

All technical.
All data. All day.

March 26, 2026 | San Francisco

Create the future of data engineering

Explore with Industry Experts

We’re bringing together a team of data engineering leaders and practitioners to share what you need to know right now.

Engineered for Data Engineers

This is a purely technical event. No salespeople. No sponsors. This is all about the data. We pinky promise.

Connect with the Community

This is a single-track event capped at 100 data engineers and architects shaping where the industry is headed. We’re keeping it small so the “hallway track” is just as good as the talks we’re bringing in.

Event Info

When

March 26, 2026

10am – 4pm, followed by a happy hour

Where

L4 - 945 Market, San Francisco

945 Market St,
San Francisco, CA 94103

Agenda

Frequently asked questions

What is underCurrent: Data Engineering?

A one-day, single-track event for data engineers, architects, and the best minds in data.

When and where is underCurrent: Data Engineering taking place?

underCurrent: Data Engineering is taking place on March 26, 2026, at L4 in San Francisco, CA.

How does this event relate to Current?

underCurrent is a more focused experience created to enrich the data engineering community over one day.

Who should attend underCurrent: Data Engineering?

Anyone who works closely with data and wants to interact with their community, grow their skills, and help shape the future of data engineering.

Is there a Call for Papers (CfP)?

No, the lineup is being curated by data engineering experts, and we’ve invited some amazing speakers.

If it’s free, does that mean it’s gonna be a bunch of vendor stuff?

Nope. Like it says above: no vendors, all technical. This event is designed for our community, by our community.

Who is hosting this event?

underCurrent will be hosted by Confluent.

Will the sessions be live streamed?

Not this time – we want to make sure everyone has a great in-person experience. But we hope to release session recordings after the event.

Lightning Talk at Current London

Limited spots

underCurrent: Data Engineering is limited to the first 100 registrants. 

Sign up now to reserve your spot!

Organized by Confluent
Hadar Federovsky
Akamai
Swaroop Oggu from Databricks as part of the Current Keynote

Stream On: From Bottlenecks to Streamline with Kafka Streams Template

Tuesday, May 20, 2025
5:30 PM - 6:15 PM

How do you make 10TB of data per hour accessible, scalable, and easy to integrate for multiple internal consumers? In this talk, we’ll share how we overcame storage throughput limitations by migrating to Kafka Streams and developing a unified template application. Our solution not only eliminated bottlenecks but also empowered internal clients to build reliable Kafka Streams applications in just a few clicks—focusing solely on business logic without worrying about infrastructure complexity. We’ll dive into our architecture, implementation strategies, and key optimizations, covering performance tuning, monitoring, and how our approach accelerates adoption across teams. Whether you're managing massive data pipelines or seeking to streamline access for diverse stakeholders, this session will provide practical insights into leveraging Kafka Streams for seamless, scalable data flow.

Location: Breakout Room 6
Level: Intermediate
Audience: Data Engineer/Scientist, Developer, Executive (Technical)
Track: Apache Kafka

Mike Araujo

Principal Engineer, Medidata Solutions

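The talk itself will cover the real architecture, but as a rough illustration of the “unified template application” pattern the abstract describes, here is a minimal, hypothetical Java sketch: the template owns configuration, serdes, and topology wiring, while an internal team supplies only a business-logic function. The topic names, application id, and transform below are placeholders, not the speaker's actual implementation.

import java.util.Properties;
import java.util.function.Function;

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.Produced;

// Hypothetical sketch only: a "template" that owns config and topology wiring,
// while the internal client supplies just the business logic.
public class StreamsTemplateApp {

    // The template builds the full application from a handful of inputs.
    public static KafkaStreams build(String appId,
                                     String inputTopic,
                                     String outputTopic,
                                     Function<String, String> businessLogic) {
        // Centrally managed configuration (performance tuning and monitoring
        // settings would also live here, owned by the platform team).
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, appId);
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        // The topology is wired once by the template; clients never touch it.
        StreamsBuilder builder = new StreamsBuilder();
        builder.stream(inputTopic, Consumed.with(Serdes.String(), Serdes.String()))
               .mapValues(value -> businessLogic.apply(value))
               .to(outputTopic, Produced.with(Serdes.String(), Serdes.String()));

        return new KafkaStreams(builder.build(), props);
    }

    public static void main(String[] args) {
        // An internal team "fills in" only its transformation logic.
        KafkaStreams streams = build("example-enrichment-app",
                                     "raw-events",
                                     "enriched-events",
                                     value -> value.toUpperCase());
        streams.start();
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}

In this sketch, everything except the final lambda is shared infrastructure, which is the point of the template approach: consumers ship new streaming applications by providing business logic alone.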

Speaking at