The business is calling for real-time data, customers are demanding secure platforms to house their personal information, and developers are pushing for faster access to complex platforms to drive a new breed of applications. Real-time data is strategic and central to every digital transformation. IoT, connected devices, smart factories, microservices, and Industry 4.0 standards are all examples of systems and deployments that rely heavily not only on data but on the ability to quickly access large amounts of real-time data. While the value of data is clear, there are challenges in building, operating, and securing these data applications and platforms.

These real-time data systems and methodologies require flexibility in adding data consumers, so that introducing new consumers (devices, functions, and even new microservices) to the data pipeline can happen with ease and with minimal, or no, rewrites to the underlying data system.
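
As a concrete illustration of that decoupling, here is a minimal sketch, assuming the confluent-kafka Python client and an illustrative topic named "orders": a brand-new consumer can attach to an existing pipeline simply by subscribing under its own consumer group, with no changes to producers or to the underlying topic.

```python
# Minimal sketch: a new consumer joins an existing pipeline.
# Assumes the confluent-kafka Python package and an illustrative
# topic called "orders"; no producer or broker changes are needed.
from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "broker1:9092",   # illustrative broker address
    "group.id": "new-analytics-service",   # a brand-new consumer group
    "auto.offset.reset": "earliest",       # start from the beginning of the log
})
consumer.subscribe(["orders"])

try:
    while True:
        msg = consumer.poll(1.0)           # wait up to 1s for a message
        if msg is None or msg.error():
            continue
        print(f"{msg.topic()}[{msg.partition()}] @ {msg.offset()}: {msg.value()}")
finally:
    consumer.close()
```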

In response to these emerging data requirements and application needs, Event-Driven Architecture (EDA) has emerged as a leading architecture for modern software applications, with Apache Kafka sitting in the middle, managing high-throughput streams and enabling data consumption in real time.

Apache Kafka is all about data and moving large amounts of it from one place to another, providing a high-throughput, distributed messaging system that addresses the shortcomings of traditional data movement tools and approaches. At a high level, Apache Kafka is a publish/subscribe messaging system composed of Producers (clients that create data), Consumers (clients that consume data), Topics (named locations where data is stored), Partitions (the ordered log files a topic is split into and stored as), and Brokers (the distributed server processes that host the topics). Apache Kafka provides a highly available, redundant architecture that is easily scaled and highly resilient to failures. It achieves this by grouping Brokers into distributed deployments, referred to as Clusters, that are managed by Apache ZooKeeper, enabling a truly distributed system architecture.
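
To illustrate the publish/subscribe model described above, here is a minimal producer sketch, again assuming the confluent-kafka Python client; the broker address and topic name are placeholders rather than a prescribed setup.

```python
# Minimal sketch: publishing events to a Kafka topic.
# Assumes the confluent-kafka Python package; the topic name and
# broker address are placeholders.
from confluent_kafka import Producer

producer = Producer({"bootstrap.servers": "broker1:9092"})

def on_delivery(err, msg):
    # Called once the broker acknowledges (or rejects) the write.
    if err is not None:
        print(f"delivery failed: {err}")
    else:
        print(f"delivered to {msg.topic()}[{msg.partition()}] @ offset {msg.offset()}")

for i in range(10):
    # Messages with the same key land in the same partition, preserving order.
    producer.produce("orders", key=str(i), value=f"order-{i}", callback=on_delivery)

producer.flush()  # block until all outstanding messages are acknowledged
```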

Kafka and Kubernetes working together enable low-cost, elastic data streaming capabilities and create a powerful solution, but they can also be black boxes that are difficult to maintain and operate and that provide little security and data governance on their own. These systems are also highly customizable and tunable, which is a benefit and, at the same time, a disadvantage, as it requires highly skilled resources to manage and maintain them.

Native12’s Innovation Center, driven by our Continuous Infrastructure and Continuous Intelligence practices, has been working on better ways of operationalizing Kafka and handling the day-to-day complexities and requirements of operating and maintaining a secure, highly available, and scalable Kafka deployment, with the objective of making Kafka easy.

Our Innovation Center focused on two key components of the system and its operations to build a holistic, scalable, and easily operationalized solution for our enterprise customers:

Deployment – The infrastructure layer Kafka is deployed on.

With the increasing flood of data in Kafka solutions and Kafka’s own seemingly limitless theoretical scale, it is critical to think about scaling and managing the infrastructure Kafka resides on. Building Kafka on top of Kubernetes running on a traditional three-tier infrastructure isn’t an optimal solution; we quickly identified the need for a hyper-converged container solution to take full advantage of Kafka. This is where our Innovation Center chose to partner with Diamanti and take advantage of their bare-metal hyper-converged infrastructure based on the Diamanti Enterprise Kubernetes Platform. This solution helps solve Kubernetes infrastructure problems by providing a reliable, efficient, and secure cloud-native platform spanning on-premises bare-metal clusters and public cloud providers, integrating high-performance compute, plug-and-play networking, persistent storage, Docker, and Kubernetes into one simple solution with full-stack support.
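
The specifics of the Diamanti platform are beyond a short snippet, but as a rough sketch of what the infrastructure layer looks like from the Kubernetes side, the code below uses the official Kubernetes Python client to inspect a broker StatefulSet and the persistent volume claims backing its commit logs. The namespace and StatefulSet name are assumptions for illustration; an operator-managed deployment would use its own naming.

```python
# Rough sketch: inspecting a Kafka StatefulSet and its storage on Kubernetes.
# Assumes the official `kubernetes` Python client and illustrative
# namespace/StatefulSet names; real deployments will differ.
from kubernetes import client, config

config.load_kube_config()            # or load_incluster_config() inside the cluster
apps = client.AppsV1Api()
core = client.CoreV1Api()

sts = apps.read_namespaced_stateful_set(name="kafka", namespace="kafka")
print(f"brokers desired/ready: {sts.spec.replicas}/{sts.status.ready_replicas}")

# Each broker pod gets its own persistent volume claim for the commit log.
pvcs = core.list_namespaced_persistent_volume_claim(namespace="kafka")
for pvc in pvcs.items:
    size = pvc.spec.resources.requests.get("storage")
    print(f"{pvc.metadata.name}: {size} ({pvc.status.phase})")
```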

DataOps – Monitoring, data investigation and exploration, security, and data governance.

As with any solution or system in production, it is critical to monitor the performance and availability of your Kafka deployment, as well as to apply your enterprise security model to those data feeds.

Monitoring a Kafka deployment is more involved than the traditional system monitoring applied to other solutions. You need visibility into every piece of the system to gain insight into the health of your Kafka services: the brokers, the schema registry, Connect clusters, the ZooKeeper ensemble, and the list goes on.
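
As a small example of what broker-level visibility can look like, the sketch below uses the AdminClient from the confluent-kafka Python package to count live brokers and flag under-replicated partitions from cluster metadata; the bootstrap address is a placeholder, and a production setup would typically feed broker metrics (for example, via JMX) into a dedicated monitoring stack instead.

```python
# Small health-check sketch using cluster metadata: counts brokers and
# flags under-replicated partitions. Assumes the confluent-kafka Python
# package; the bootstrap address is a placeholder.
from confluent_kafka.admin import AdminClient

admin = AdminClient({"bootstrap.servers": "broker1:9092"})
metadata = admin.list_topics(timeout=10)

print(f"brokers online: {len(metadata.brokers)}")

for topic_name, topic in metadata.topics.items():
    for pid, partition in topic.partitions.items():
        # A partition is under-replicated when its in-sync replica set
        # is smaller than its assigned replica set.
        if len(partition.isrs) < len(partition.replicas):
            print(f"under-replicated: {topic_name}[{pid}] "
                  f"isr={partition.isrs} replicas={partition.replicas}")
```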

Applying your enterprise security model to real-time data streams can also be tricky. Kafka is a high-throughput system handling many messages with critical delivery times, and adding any lag or latency to these pipelines can have ripple effects on the applications consuming them. At the same time, these pipelines now carry large amounts of business-critical, customer-sensitive, and regulated data.
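
One common way to apply that security model without adding extra hops to the pipeline is to enforce encryption and authentication at the client/broker boundary and handle authorization with broker-side ACLs. The sketch below shows a producer configured for SASL over TLS using the confluent-kafka Python client; the mechanism, credentials, and certificate path are illustrative assumptions, not a prescribed configuration.

```python
# Illustrative sketch: a producer authenticating over SASL/SSL so that
# broker-side ACLs can authorize it per topic. Mechanism, credentials,
# and certificate path are placeholders, not a prescribed setup.
from confluent_kafka import Producer

secure_producer = Producer({
    "bootstrap.servers": "broker1:9093",
    "security.protocol": "SASL_SSL",        # TLS encryption + SASL authentication
    "sasl.mechanisms": "SCRAM-SHA-512",     # illustrative mechanism
    "sasl.username": "orders-service",      # principal that broker ACLs authorize
    "sasl.password": "example-secret",      # in practice, injected from a secret store
    "ssl.ca.location": "/etc/kafka/certs/ca.pem",
})

secure_producer.produce("orders", key="42", value="order-42")
secure_producer.flush()
```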

Our Innovation Center realized that building custom in-house applications within our customers’ ecosystems was not the right answer to these DataOps issues. Those custom deployments are difficult to manage, require costly resources, and don’t scale easily. That’s why we chose to partner with Lenses.io, the leader in the DataOps market, whose product lets you build a DataOps platform over Kafka. Lenses.io enabled our customers to monitor, administer, and secure their Kafka deployments with minimal effort.

Together with our partners at Diamanti and Lenses.io, the Native12 team was able to operationalize enterprise Kafka deployments with our customers and build a scalable, secure, and highly available system that adhered to their enterprise requirements while maintaining the frictionless access required by their development teams and applications, unlocking the power of their own data!

If you’d like to learn more about this Turnkey Kafka solution and how our Innovation Center engineering team deployed and delivered it, or have any questions or comments, please reach out to us at sales@native12.com or message me directly.

The Native12 Innovation Center,
Sami Abunasser, Founder & Chief of Engineering

Leave a comment or send us a note