One of the key building blocks of NuWave’s advanced predictive analytics solutions is a service management platform we built called Dex, which is short for Deus Ex Machina. Dex was built as a collection of microservices using Spring Boot and is responsible for coordinating the execution of complex workflows that take data through acquisition, ingestion, transformation, normalization, and modeling with many different advanced machine learning algorithms. These processing steps are performed with a large number of technologies (Java, Python, R, Knime, AWS SageMaker, etc.) and are often deployed as Docker containers for execution on dedicated servers or EC2 instances, or as AWS Fargate tasks. Dex seamlessly manages the coordination of the execution of all these ephemeral tasks across these various technologies while also providing runtime configuration, secure credential management, state management, and storage provisioning for each executing task. This has led to the affectionate nickname of “Cat Herder” for this critical technology at the heart of our advanced analytical solutions.
With such a complex processing workflow spread across multiple underlying technologies, we faced the challenge of tracing and debugging its activity. While technologies such as AWS CloudWatch make it easy to consolidate logs to a common location, we needed a way to follow an individual thread of processing for a single multi-step (multi-microservice) workflow through the logs. In Dex’s largest deployment to date, we have a monthly data ingestion workflow that pulls data from over three dozen different sources, normalizes it, transforms it to a common schema, and then runs a large number of baseline predictive models before finally updating an arbitrary number of “what if” and “likelihood” scenarios depending on what analysts are currently monitoring. This often results in several dozen workflow steps running concurrently, all of them interacting with Dex’s services, which produces logs filled with messages from a tapestry of interleaved processing.
With the goal of making this tracking much easier, we decided to introduce a client identifier to each of our microservices’ REST interfaces. This identifier would be a string composed of a processing step identifier combined with the unique tracking identifier specific to the underlying environment performing the step (e.g. an AWS Fargate task identifier). We quickly realized that if we were to include the client identifier as an explicit parameter for each request, it would be a large engineering effort to revise each of the microservice APIs and update the corresponding REST client libraries in several different languages. In addition, we would then be faced with coordinating the rollout of all of the revised clients as each microservice’s interface was updated.
To sidestep this engineering effort and mitigate the disruption of this change, we decided to instead support the new client identifier as an optional request header in each microservice’s REST API. This allows them to seamlessly support legacy client requests and allows us to update the various workflow step containers as opportunity allows. And thanks to Spring Boot, the server-side changes were trivial for each endpoint. We simply needed to inject a custom client request handler to extract the optional client identifier from each incoming REST request. If the handler finds a client identifier, it sets it in the diagnostic context for the logging system so that all log messages resulting from handling the request are tagged with the client identifier.
We implement the client request handler with our own HandlerInterceptorAdapter as follows:
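A minimal sketch of such an interceptor is shown below. The header name `X-Client-Id`, the MDC key `clientId`, and the fallback value are illustrative choices, not necessarily the names used in Dex:

```java
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.slf4j.MDC;
import org.springframework.web.servlet.handler.HandlerInterceptorAdapter;

public class ClientIdInterceptor extends HandlerInterceptorAdapter {

    // Hypothetical names for the custom header and the MDC key.
    public static final String HEADER_NAME = "X-Client-Id";
    public static final String MDC_KEY = "clientId";
    private static final String DEFAULT_CLIENT_ID = "unknown";

    @Override
    public boolean preHandle(HttpServletRequest request,
                             HttpServletResponse response,
                             Object handler) {
        // Pull the optional client identifier from the request header,
        // falling back to a default when no header was sent.
        String clientId = request.getHeader(HEADER_NAME);
        if (clientId == null || clientId.isEmpty()) {
            clientId = DEFAULT_CLIENT_ID;
        }
        // Tag every log message produced while handling this request.
        MDC.put(MDC_KEY, clientId);
        return true;
    }

    @Override
    public void afterCompletion(HttpServletRequest request,
                                HttpServletResponse response,
                                Object handler,
                                Exception ex) {
        // Clear the identifier so pooled request threads don't leak it
        // into subsequent, unrelated requests.
        MDC.remove(MDC_KEY);
    }
}
```

Clearing the MDC in `afterCompletion` matters because servlet containers reuse threads; without it, a request that omits the header could inherit the previous request’s identifier.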
As you can see, the handler is simple: it looks for the custom header parameter and places its value into the mapped diagnostic context of the SLF4J logging system. While we use a default value for a missing client identifier, you could easily enhance this to use the source IP address of the request in the absence of an explicit client id. With the custom handler developed, we simply need to ensure that it is deployed with each endpoint. This is done by injecting our own Spring WebMvcConfigurer, which is responsible for adding the handler to the registry of interceptors Spring Boot uses for request handling:
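A sketch of that configuration bean follows; the class names are hypothetical, with `ClientIdInterceptor` standing in for our custom client request handler:

```java
import org.springframework.context.annotation.Configuration;
import org.springframework.web.servlet.config.annotation.InterceptorRegistry;
import org.springframework.web.servlet.config.annotation.WebMvcConfigurer;

@Configuration
public class ClientIdWebConfig implements WebMvcConfigurer {

    @Override
    public void addInterceptors(InterceptorRegistry registry) {
        // Register the client identifier interceptor so it runs for
        // every incoming request to this microservice.
        registry.addInterceptor(new ClientIdInterceptor());
    }
}
```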
By including this bean in the build of each microservice, we now have support for our custom client identifier. The only step left is updating each microservice’s logging configuration to include the client identifier. This is done in each service’s Spring Boot application.properties file by setting the logging pattern to include a reference to the property set in the mapped diagnostic context:
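Assuming an MDC key of `clientId`, the configuration might look like the following; the surrounding pattern elements are illustrative, and `%X{clientId}` is the Logback conversion word that pulls the value from the mapped diagnostic context:

```properties
# application.properties: include the MDC value "clientId" in each log line
logging.pattern.console=%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level [%X{clientId}] %logger{36} - %msg%n
```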
This pattern gives us log messages that look like the following:
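A hypothetical example, with a client identifier composed of a step name and a task identifier as described earlier (all values invented for illustration):

```
2019-06-12 09:14:02.127 [http-nio-8080-exec-4] INFO  [ingest-step-03/fargate-task-0a1b2c] c.n.dex.IngestController - Starting source ingestion
```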
From the client library’s perspective, setting the header is equally easy:
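Our actual client libraries span several languages, but a sketch using the JDK’s built-in `HttpClient` API shows the idea; the header name, URL, and identifier values are illustrative:

```java
import java.net.URI;
import java.net.http.HttpRequest;

public class ClientIdExample {
    public static void main(String[] args) {
        // Compose the identifier from the processing step and the
        // environment's tracking identifier (values invented here).
        String clientId = "ingest-step-03/fargate-task-0a1b2c";

        // Attach the identifier as a custom header on the request.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://dex.example.com/api/v1/jobs"))
                .header("X-Client-Id", clientId)
                .GET()
                .build();

        System.out.println(request.headers().firstValue("X-Client-Id").orElse(""));
    }
}
```

Because the identifier rides along as a header, no method signatures change in the client library; callers that never set it continue to work unchanged.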
And with that, we’ve successfully introduced the client identifier to all of our services while maintaining compatibility with the existing clients. Custom request headers are a powerful technique for supporting universal parameters for REST requests; they’re not just for standard request information and authentication tokens. Furthermore, the technique of using the logging diagnostic context is valuable for many other situations where log messages need to be tagged with information specific to a given processing context (such as a server-side watchdog thread that runs periodically).