Application architectures have evolved to separate the frontend from the backend and further divide the backend into separate microservices.
Modern distributed application architectures have created the need for API gateways and have helped popularize API management and service mesh technologies.
Microservices provide the freedom to use the most appropriate type of database based on the needs of the service. Such a polyglot persistence layer raises the need for capabilities similar to those of API gateways, but for the data layer.
Data gateways act like API gateways but focus on the data aspect. A data gateway provides capabilities for abstraction, security, scaling, federation, and contract-based development.
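To make these capabilities concrete, here is a minimal sketch of the federation, abstraction, and security roles a data gateway plays. All class and method names are hypothetical, not taken from any real product; the "datastores" are in-memory stubs standing in for heterogeneous backends.

```python
# Hypothetical sketch: one gateway contract over heterogeneous,
# per-service datastores. Names are illustrative, not a real API.

class DataGateway:
    """Exposes a uniform contract over several backing datastores."""

    def __init__(self):
        # In practice these would be connectors to e.g. PostgreSQL,
        # MongoDB, or a key-value store; here they are simple stubs.
        self.sources = {}

    def register(self, name, fetch_fn):
        """Attach a backing datastore behind a uniform fetch function."""
        self.sources[name] = fetch_fn

    def query(self, name, key, caller_roles=()):
        """Federated read with a trivial security check (illustrative)."""
        if "reader" not in caller_roles:
            raise PermissionError("caller lacks read access")
        fetch = self.sources.get(name)
        if fetch is None:
            raise KeyError(f"unknown data source: {name}")
        return fetch(key)


# Usage: two "services" with different storage shapes behind one contract.
gateway = DataGateway()
orders = {42: {"status": "shipped"}}
gateway.register("orders", orders.get)
gateway.register("users", {"alice": {"tier": "gold"}}.get)

print(gateway.query("orders", 42, caller_roles=("reader",)))
```

The point of the sketch is the shape, not the implementation: callers see one contract and one access-control point, while each service keeps whatever store suits it.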
There are many types of data gateways, from traditional data virtualization technologies and lightweight GraphQL translators, to cloud-hosted services, connection pools, and open source alternatives.
There is a lot of buzz around 12-factor apps, microservices, and service mesh these days, but not so much around cloud-native data. The number of conferences, blog posts, best practices, and tools specifically designed for accessing cloud-native data is relatively low. One of the main reasons for this is that most data access technologies are designed and built on a stack that favors static environments rather than the dynamic nature of cloud and Kubernetes environments.
In this article, we'll explore the different categories of data gateways, from the most monolithic to those designed for the cloud and Kubernetes. We will see which technical challenges are introduced by the microservices architecture and how data gateways can complement API gateways to meet these challenges in the Kubernetes era.
Application Architecture Evolutions
A similar evolution has occurred with backend services through the microservices movement. Decoupling the frontend was not enough; the monolithic backend had to be decomposed into bounded contexts that allow fast, independent releases. These are examples of how architectures, tools, and techniques have evolved under the pressure of business needs for rapid delivery of application software on a global scale.
This brings us to the data layer. One of the existential motivations for microservices is to have an independent data source per service. If multiple microservices touch the same data, sooner or later you will introduce coupling and limit scalability or independent releases. This means not only a separate database per service, but also heterogeneous ones, so that each microservice is free to use whichever type of database best meets its needs.
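The database-per-service idea can be sketched in a few lines. In this illustrative example (service and field names are hypothetical), each service owns its datastore privately, and other services reach that data only through the owning service's API, never by querying the store directly — which is exactly what prevents the coupling described above.

```python
# Illustrative sketch: each service owns its datastore; peers depend on
# the service's API, not on its database. Names are hypothetical.

class InventoryService:
    def __init__(self):
        self._stock = {"sku-1": 5}   # private store; could be a document DB

    def available(self, sku):        # the public contract
        return self._stock.get(sku, 0)


class OrderService:
    def __init__(self, inventory):
        self._orders = []            # private store; could be relational
        self._inventory = inventory  # depends on the API, not the database

    def place(self, sku):
        if self._inventory.available(sku) == 0:
            raise ValueError(f"{sku} out of stock")
        self._orders.append(sku)
        return len(self._orders)


orders = OrderService(InventoryService())
print(orders.place("sku-1"))  # → 1
```

Because `OrderService` never sees the inventory store itself, the inventory team can swap its database technology without any coordinated release.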
Application Architecture Evolution Brings New Challenges
While decoupling the frontend from the backend and dividing monoliths into microservices gave the desired flexibility, it created challenges that didn't exist before. Service discovery and load balancing, network-level resiliency, and observability became important areas of technological innovation addressed in subsequent years.
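As a rough illustration of the first of those challenges, here is a minimal sketch of client-side service discovery with round-robin load balancing. A real system would consult a registry such as the Kubernetes API or Consul rather than an in-memory dict; everything here is illustrative.

```python
import itertools

# Minimal sketch of client-side service discovery plus round-robin load
# balancing. In production the registry would be external (e.g. the
# Kubernetes API or Consul); this in-memory version is for illustration.

class ServiceRegistry:
    def __init__(self):
        self._instances = {}   # service name -> list of addresses
        self._cursors = {}     # service name -> rotating iterator

    def register(self, service, address):
        self._instances.setdefault(service, []).append(address)
        # Recreate the rotation so new replicas join the cycle.
        self._cursors[service] = itertools.cycle(self._instances[service])

    def resolve(self, service):
        """Return the next instance, rotating across replicas."""
        if service not in self._cursors:
            raise LookupError(f"no instances for {service}")
        return next(self._cursors[service])


registry = ServiceRegistry()
registry.register("payments", "10.0.0.1:8080")
registry.register("payments", "10.0.0.2:8080")
print(registry.resolve("payments"), registry.resolve("payments"))
```

In Kubernetes this logic is what Services and kube-proxy (or a service mesh sidecar) take off the application's hands.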
Likewise, giving each microservice its own database, with the freedom to choose among different datastore technologies, is a challenge in itself. This is increasingly visible with the data explosion and the demand for data access not only from services but also from AI/ML and real-time workloads.
The Rise of API Gateways
With the growing adoption of microservices, it has become clear that operating such an architecture is difficult. While having all these separate microservices sounds great, it requires tools and practices that we didn't need or didn't have before. This led to more advanced release strategies such as blue/green deployments, canary releases, and dark launches. It then led to fault injection and chaos testing, and finally to advanced network tracing and telemetry. All of this created a whole new layer that sits between the frontend and the backend. This layer is occupied primarily by API management gateways, service discovery, and service mesh technologies, but also by tracing components, application load balancers, and all kinds of traffic monitoring and management proxies. It even includes projects like Knative, with scale-to-zero capabilities driven by network activity.
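The canary release strategy mentioned above boils down to weighted routing at the gateway: a small fraction of traffic is sent to the new version. A hedged sketch, where the 10% split, the version names, and the injectable random source are all illustrative:

```python
import random

# Sketch of the weighted routing behind a canary release: a gateway
# sends a small fraction of requests to the new version. The 10% weight
# and version names are illustrative; rng is injectable for testing.

def route(canary_weight=0.1, rng=random.random):
    """Pick a backend version for one request."""
    return "v2-canary" if rng() < canary_weight else "v1-stable"


# Deterministic usage: a draw below the weight goes to the canary.
print(route(canary_weight=0.1, rng=lambda: 0.05))  # → v2-canary
print(route(canary_weight=0.1, rng=lambda: 0.95))  # → v1-stable
```

In practice this decision lives in the gateway or mesh configuration (e.g. weighted backends on a route) rather than in application code, which is precisely why this new infrastructure layer emerged.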
Over time, it became clear that building microservices at a rapid pace and operating them at scale requires tools that we did not need before. Something that was once handled entirely by a single load balancer had to be replaced with a new, more advanced management layer. A new layer of technology was born, along with a new set of practices and techniques and a new group of users responsible for it.