When I interview a Java developer, if I see Spring Boot or Spring on the candidate’s resume, I may start with a simple question: “What is the default scope for a Spring bean?” Most people get it right. I then follow with a tricky question: “Does Spring make sure a Singleton bean is thread-safe?” or “Does the developer need to do anything to make a Singleton bean thread-safe?”
When I say “tricky”, it’s not because the question is technically tricky, but because half of the interviewees have no idea. The other half, who answer correctly, don’t always demonstrate a solid understanding of Singletons and thread safety. It’s okay to guess at an interview, I suppose.
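For readers who wonder why the second question matters, here is a minimal sketch (the class and field names are made up for illustration) of a singleton-scoped bean whose mutable state makes it unsafe under concurrent requests:

    import org.springframework.stereotype.Service;

    // Singleton scope is the default: one shared instance serves every request.
    @Service
    public class VisitCounter {

        // Mutable state shared by all threads calling this bean.
        private int count = 0;

        // Not thread-safe: "++count" is a read-modify-write, so concurrent
        // requests can lose updates. Spring adds no synchronization for
        // singleton beans; making this safe (AtomicInteger, synchronization,
        // or statelessness) is the developer's responsibility.
        public int increment() {
            return ++count;
        }
    }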
Spring Boot is one of those popular frameworks for Java developers. Like most other Java frameworks, it provides proven reusable libraries and increases productivity. Some developers can probably make a living by simply being good at it.
However, because it encapsulates the interpretation of various Java specifications and hides the complexity of design and implementation, the framework itself often becomes a serious impediment to developers understanding the underlying fundamentals.
Many Spring Boot developers don’t know that Spring Boot is just a framework on top of another popular framework, the Spring Framework, which was initially a framework for Java Servlet applications. Most freshly minted Spring Boot developers have never heard of Servlets, not to mention web.xml. They only know that their Spring Boot applications “just run”. They never know why or how they run.
Because of that, they never think about what the underlying Servlet Container is, what the default configurations (like Max Concurrent Requests) are, or how to fine-tune those configurations. Imagine asking them to write a Java web application without Spring Boot.
Frameworks tend to wrap a lot of default features and behaviors under the hood; to name a few: the default Encryption Algorithm, the default Socket Timeout, the default Retry Strategy.
In the past, frameworks might have had a configuration property for each “feature”, but this has changed in recent years. Nowadays, framework authors tend to favor “convention over configuration”. Old configuration files are replaced by annotations with “sensible defaults”. Moreover, many features and behaviors are “discovered” automatically based on your running environment: system properties, environment variables, and what is on the classpath.
Several years back, I led a framework team. We built a framework as the foundation for a slew of web applications supporting a multi-million-dollar business. We worked very hard to support all major features by default, while still allowing each application to extend and override each feature through configuration and automatic discovery. I learned first hand that it is even harder for application developers to fully understand how each feature works and how to extend or override it.
Naturally, due to the lack of visibility and transparency of frameworks, people make a lot of assumptions about them, such as Singleton bean thread safety. Some of those assumptions will definitely haunt the team down the road if the technical leads don’t review the design and code carefully.
Over time, frameworks evolve or die. If you ever worked with the Struts 1.x framework and didn’t understand Java Servlets, you would have had a difficult time migrating your applications to Struts 2.x or Spring.
Frameworks are your tools, not your crutches. If you don’t think outside the box of Spring Boot, you can’t professionally outgrow Spring Boot. Simple. Period.
That is true of other frameworks too.
Frameworks can help you get started quickly, but understanding the underlying principles will help you in the long run.
1. Introduction
To build resilient software applications, when architecting the integration points with downstream services, we should consider all error scenarios. Robust error handling is essential, and retrying remote API calls is an important part of it.
A retry can be done either synchronously or asynchronously. If the clients require a response with the execution status, not just an acknowledgement of receipt of the request, it’s appropriate to implement synchronous retries with limits on the total number of retries and the total retry time. On the other hand, if the clients don’t care about the actual execution status, or have ways to receive responses asynchronously, it is almost always a good idea to adopt an asynchronous retry architecture. Of course, before putting a request into the asynchronous retry process, we can still implement a synchronous retry first whenever it makes sense.
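For the synchronous case, a minimal sketch of a retry loop bounded by both a maximum attempt count and a total time budget might look like the following (callRemoteApi and the limit values are hypothetical placeholders):

    import java.time.Duration;
    import java.time.Instant;

    public final class SyncRetry {

        // Hypothetical limits; real values depend on the caller's SLA.
        private static final int MAX_ATTEMPTS = 3;
        private static final Duration TIME_BUDGET = Duration.ofSeconds(5);

        public static String callWithRetry() throws Exception {
            Instant deadline = Instant.now().plus(TIME_BUDGET);
            Exception lastError = null;
            for (int attempt = 1; attempt <= MAX_ATTEMPTS; attempt++) {
                try {
                    return callRemoteApi();           // the actual downstream call
                } catch (Exception e) {
                    lastError = e;
                    if (Instant.now().isAfter(deadline)) {
                        break;                        // total time limit exceeded
                    }
                    Thread.sleep(200L * attempt);     // simple linear backoff
                }
            }
            // Out of attempts or out of time: hand the request to the async path.
            throw lastError;
        }

        private static String callRemoteApi() throws Exception {
            throw new UnsupportedOperationException("placeholder for the real call");
        }
    }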
In this article, we will focus on the asynchronous retry architecture.
2. Queuing for Asynchronous Retry Architecture
A queuing mechanism is at the center of the Asynchronous Retry Architecture.
The originating service constructs a Retry Message that includes the original request info, the destination URL and other metadata, and puts the Retry Message into an Async Retry Queue built on the chosen queuing system. A trigger can be configured on the queue to invoke a processor, or an Async Retry Processor can poll the queuing system for new messages. The Async Retry Processor then uses the message received from the Async Retry Queue to make another call to the destination downstream service.
A Dead Letter Queue is used to hold Retry Messages for a certain period of time after a (configurable) maximum number of retries has been reached.
The figure below shows a very high-level workflow and message flow:
3. Asynchronous Retry Architecture Diagram
Asynchronous Retry Architecture
In the above diagram, Service A is the calling service and Service B is the destination downstream service. If the initial call in Step 1 fails, Service A will put a Retry Message into the Async Retry Queue.
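As an illustration of this step, assuming AWS SQS is the chosen queuing system (the queue URL and class name below are made up; the Retry Message body follows the generic data model described in section 4):

    import software.amazon.awssdk.services.sqs.SqsClient;
    import software.amazon.awssdk.services.sqs.model.SendMessageRequest;

    public final class RetryEnqueuer {

        // Hypothetical queue URL; in practice it would come from configuration.
        private static final String ASYNC_RETRY_QUEUE_URL =
                "https://sqs.us-east-1.amazonaws.com/123456789012/async-retry-queue";

        private final SqsClient sqs = SqsClient.create();

        // Called by Service A when the initial call to Service B fails.
        public void enqueue(String retryMessageJson) {
            sqs.sendMessage(SendMessageRequest.builder()
                    .queueUrl(ASYNC_RETRY_QUEUE_URL)
                    .messageBody(retryMessageJson)   // the Retry Message from section 4
                    .build());
        }
    }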
Depending on which queuing system is chosen, either a trigger can be configured on the Async Retry Queue to invoke the Processor (3.1), or the Processor can be configured to poll the Async Retry Queue (3.2). If AWS SQS is chosen as the queuing system, for example, a Lambda function can serve as the processor and be invoked automatically when a new message arrives.
Once the Async Retry Processor receives the Retry Message, it can use the request info in the message to reconstruct the request and send it to the destination URL that is also included in the Retry Message.
A Retry Message will be moved to the Dead Letter Queue if the maximum retry attempts have been reached, as detected by either the Async Retry Processor or the Async Retry Queue.
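A minimal sketch of the Async Retry Processor’s core logic, assuming the Retry Message has already been parsed into its url, method, payload and receivedCount fields (the queue plumbing, the parsing, and the maximum-attempt value are illustrative assumptions):

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public final class AsyncRetryProcessor {

        // Hypothetical limit; section 6 suggests making this configurable.
        private static final int MAX_RETRY_ATTEMPTS = 5;

        private final HttpClient http = HttpClient.newHttpClient();

        // Re-sends one Retry Message. Returns true when the retry succeeded;
        // false means the message should be retried again later or moved to
        // the Dead Letter Queue by the caller or by the queue itself.
        public boolean process(String url, String method, String payload, int receivedCount)
                throws Exception {
            if (receivedCount >= MAX_RETRY_ATTEMPTS) {
                return false;   // give up: the message goes to the Dead Letter Queue
            }
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create(url))
                    .method(method, payload == null
                            ? HttpRequest.BodyPublishers.noBody()
                            : HttpRequest.BodyPublishers.ofString(payload))
                    .build();
            HttpResponse<String> response =
                    http.send(request, HttpResponse.BodyHandlers.ofString());
            return response.statusCode() < 500;   // crude success check for this sketch
        }
    }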
4. Generic Data Model for Retry Message
A Retry Message can have a generic data model as below:
<span id="60ab" class="hc wp tx so wq b gg wr ws x wt" data-selectable-paragraph="">{
"request":{
"url":"http[s]://$host:$port/$destitnation_endpoint_including_query_parameters",
"method":"GET|POST|PUT|DELETE|PATCH",
"payload":"$json_string",
"headers":[$headers_to_pass_to_the_target_service]
},
"receivedCount": "$number",
"async-retry-queue":"$async-retry-queue[optional]",
"dead-letter-queue":"$dead-letter-queue[optional]"
}</span>
With this generic data model for the Retry Message, an Async Retry Processor can be designed to process any Retry Message constructed by any originating service (Service A) for any destination service (Service B).
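One possible way to model this message in Java is shown below; the field names mirror the JSON above, headers are modeled as a simple map for brevity, and JSON (de)serialization is left to whatever mapper the service already uses:

    import java.util.Map;

    // In-memory representation of the generic Retry Message described above.
    public record RetryMessage(
            Request request,
            int receivedCount,
            String asyncRetryQueue,   // optional: overrides the default retry queue
            String deadLetterQueue) { // optional: overrides the default dead letter queue

        // The original request to replay against the destination service.
        public record Request(
                String url,                     // destination endpoint, incl. query parameters
                String method,                  // GET | POST | PUT | DELETE | PATCH
                String payload,                 // JSON body as a string, if any
                Map<String, String> headers) {  // headers to pass to the target service
        }
    }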
5. Retryable Errors
Only non-functional errors are retryable. Below are some examples (a sketch of how such a check might look follows the list):
a. No response at all;
b. Temporary network issues, usually 5xx (HTTP status) errors;
c. Request Timeout: HTTP status 408 errors;
d. Conflict: HTTP status 409 errors;
e. Too Many Requests: HTTP status 429 errors;
f. Any response that carries a Retry-After header from the downstream service;
g. Unauthorized: HTTP status 401 errors with an expired-token error code/message. These kinds of errors usually require a new token; in this case, the Async Retry Processor is responsible for obtaining the proper token.
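Here is a minimal sketch of how an originating service might classify a response as retryable based on the list above. Treating every 5xx status as retryable and the exact status set are simplifying assumptions, and the expired-token 401 case is intentionally left to the processor:

    import java.net.http.HttpResponse;
    import java.util.Set;

    public final class RetryableErrors {

        // Specific statuses from the list above: 408, 409 and 429.
        private static final Set<Integer> RETRYABLE_STATUSES = Set.of(408, 409, 429);

        public static boolean isRetryable(HttpResponse<?> response) {
            int status = response.statusCode();
            if (status >= 500) {
                return true;                                   // (b) 5xx errors
            }
            if (RETRYABLE_STATUSES.contains(status)) {
                return true;                                   // (c), (d), (e)
            }
            if (response.headers().firstValue("Retry-After").isPresent()) {
                return true;                                   // (f) explicit Retry-After
            }
            // (g) 401 with an expired token is handled separately, because the
            // Async Retry Processor must obtain a fresh token before retrying.
            return false;
        }

        // (a) "no response at all" surfaces as an exception (e.g. a timeout), not a
        // status code, so callers typically treat thrown IOExceptions as retryable too.
    }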
6. Conclusion
The Asynchronous Retry Architecture can be used to handle all retryable errors when the client does not expect the execution result in the response of the call. It is extremely useful when a function may need to be retried many times over a long period of time.
The maximum number of retry attempts, the async retry queue name/URL, and the dead letter queue name/URL can all be configurable. These configurable values make the architecture flexible enough for many different applications.
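As an illustration, these tunables could be read from the environment so the same processor can serve many applications (the variable names and defaults below are made up):

    // Hypothetical configuration holder for the retry architecture.
    public record RetryConfig(
            int maxRetryAttempts,
            String asyncRetryQueueUrl,
            String deadLetterQueueUrl) {

        // Reads the tunables from environment variables, falling back to defaults.
        public static RetryConfig fromEnvironment() {
            return new RetryConfig(
                    Integer.parseInt(getOrDefault("MAX_RETRY_ATTEMPTS", "5")),
                    getOrDefault("ASYNC_RETRY_QUEUE_URL", ""),
                    getOrDefault("DEAD_LETTER_QUEUE_URL", ""));
        }

        private static String getOrDefault(String name, String defaultValue) {
            String value = System.getenv(name);
            return value == null ? defaultValue : value;
        }
    }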
Today, three funding organizations are working to stimulate the development of tools and platforms that improve researchers’ ability to access and use these data. At the 7th medical data conference held in Washington, D.C., the (U.S.) National Institutes of Health (NIH), the UK-based Wellcome Trust, and the Howard Hughes Medical Institute announced the six finalist teams for the first Open Science Prize.
According to the World Health Organization, air pollution is the culprit behind one in eight deaths worldwide, yet air quality data have long been stored on obscure websites, hard to access and inconsistent in format. The OpenAQ platform prototype (https://openaq.org/#/) merges and standardizes the data into publicly available, real-time air quality data. It has already collected and shared 9.7 million air quality measurements from more than 500 locations in 13 countries.
When the U.S. Food and Drug Administration approves a drug, the agency publicly releases a collection of information about that drug, often including previously unpublished clinical trials. Although this information is quite valuable, it is hard to obtain, collect, and search. OpenTrialFDA is working to build a user-friendly web interface that lets anyone access the relevant information, and it also provides an application programming interface (API) that allows third-party platforms to access and search the data. (https://www.openscienceprize.org/p/s/1844843/)