Being reactive in distributed systems is critical, but what does that really look like at scale, with terabytes or petabytes of data ingested per day? And what does it mean for application and deployment architecture?
There is a need to simplify. How can we build resilient, self-healing systems that run at massive scale, don't lose data, and support rigorous requirements amid the chaos of big data: partial failures, split-brain scenarios, and eventual consistency? How would you build awareness and intelligence into your systems if 'everything fails all the time' were your starting point?
This talk looks at these problems differently, exploring reactive strategies and collaborating technologies, and how they help achieve more stable, self-aware systems.
Helena has been building large-scale, reactive, distributed cloud-based systems for many years, and distributed big data systems for the last four, choosing Scala, Akka, and Kafka as the core of each. She will discuss simplifying big data architecture and data flows, and a collaborative set of supporting technologies.
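To make the "everything fails all the time" starting point concrete, here is a minimal sketch in plain Scala (no Akka dependency) of a retry wrapper that treats failure as the expected case rather than the exception. The `retry` function, its `maxAttempts` parameter, and the flaky operation in `main` are all illustrative names for this sketch, not anything from the talk itself.

```scala
import scala.annotation.tailrec
import scala.util.{Failure, Success, Try}

object RetryDemo {
  // Run an operation that is assumed to fail routinely, retrying up to
  // maxAttempts times and returning the final Try (Success or Failure).
  @tailrec
  def retry[A](maxAttempts: Int)(op: () => A): Try[A] =
    Try(op()) match {
      case s @ Success(_)                     => s
      case f @ Failure(_) if maxAttempts <= 1 => f
      case Failure(_)                         => retry(maxAttempts - 1)(op)
    }

  def main(args: Array[String]): Unit = {
    var calls = 0
    // A deliberately flaky operation: fails twice, then succeeds.
    val result = retry(5) { () =>
      calls += 1
      if (calls < 3) throw new RuntimeException("transient failure")
      "ok"
    }
    println(s"result=$result after $calls calls")
  }
}
```

In a real Akka-based system this per-call pattern would instead be expressed through supervision: a parent actor decides whether to restart, resume, or stop a failing child, so recovery policy lives in the architecture rather than in every call site.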