
In the last post, we saw how deep-checks & health-checks can prevent your application from failing due to a bad deployment. Continuing with the same theme, we will take a look at how we can enforce better testing in a service-oriented architecture even before the code is merged into the main branch of a service.
Understanding the problem
We now know that a good cloud architecture should have resiliency against a bad deployment through health probes. However, this also means that someone was able to check in bad code & it reached the deployment stage. Also, as your service becomes more complex, testing all integration points through probes will become a challenge. A typical cloud application can have an architecture that looks like the one below

There are multiple integration points & a bug in one of the services can result in complete application failure when these errors bubble up. Something as simple as renaming a field can go unnoticed until the code reaches staging or canary (and in the worst case, production).
There are 2 entities in any API integration i.e. a producer service that serves an API & a consumer service that consumes this API. If the producer changes the API contract by renaming a field in the response body or introducing a new field, it can break the consumer service when it tries to invoke the API. Similarly, if the consumer service renames a field that it expects in the producer's response, it can again result in a failure.
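To make the failure mode concrete, here is a minimal sketch of what a silent field rename looks like at runtime. The field names mirror the fraud-check example used later in this post; the classes here are illustrative, not taken from the demo repository.

```java
import java.util.Map;

public class FieldRenameDemo {
    // Simulates the producer's response after it renamed "fraudCheckStatus" to "status"
    static Map<String, String> producerResponse() {
        return Map.of("status", "FRAUD");
    }

    // The consumer still looks up the old field name & silently gets nothing back
    static String consumerReadsStatus() {
        return producerResponse().get("fraudCheckStatus");
    }

    public static void main(String[] args) {
        // No compile-time error, no exception at the boundary — just a null
        // that surfaces as a failure somewhere downstream
        System.out.println(consumerReadsStatus()); // prints: null
    }
}
```

Nothing fails at build time on either side; the break only shows up when the two services actually talk to each other, which is exactly the gap contract testing closes.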

Even with the best intentions, developers are unable to test all integration points. Unit tests typically rely on mocks for the integration points; these mocks are completely under the test author's control & don't guarantee a 1:1 mapping with the production environment. There is an option to go through an end-to-end testing workflow, which is usually time-consuming & flaky. My experience of working across multiple teams is that almost all service owners treat their staging environment as a ground for testing & experimentation, which over time drifts away from the actual production environment.
You will usually notice the impact of this issue when the code has already been merged & either someone notices errors in the Splunk logs for staging, or you get added to a Slack channel for a customer issue. One solution is to force all changes through an end-to-end testing flow by running API tests, though that will slow down your developer velocity & teams will end up battling flaky tests instead of building features.
Introducing contract testing
Contract testing, as the name suggests, is testing your code against a contract agreed upon by the services on both sides of an integration point. Any change in the contract is visible on both the consumer & producer side, which means you get feedback immediately & you are able to catch issues the moment you raise your PR, not when the code is actually deployed.
Contracts test both sides of an integration point, which removes the risk introduced by mocks that were completely in your control. You also don't depend on the heavy test setup required for end-to-end tests, as services can generate stubbed responses based upon the contract.
Here is how the testing flow looks in the world of contract testing (the image is from the Spring Cloud Contract documentation)

- The contracts can be part of the producer repository, the consumer repository, or a standalone repository
- The producer service uses this contract to verify its implementation & pushes the stubbed response to a central artifactory
- The consumer service uses these stubs to run its tests to verify the integration
- If the producer makes a change that breaks the contract, contract verification will fail its build
- If the consumer makes a change that breaks the contract, its integration tests will start failing
Contracts in action
Let's now see contract testing in action. All the code for this demo is present in this GitHub repository. We have 2 services i.e. a producer-service & a consumer-service. The producer-service provides a functionality for fraud checks through a REST endpoint i.e. /fraudcheck. The consumer-service consumes this endpoint to process a transaction. We have a Java module that defines the contract using Spring Cloud Contract.
The contract specifies that if a PUT request is made to the /fraudcheck endpoint with any client.id & a loanAmount equal to 99999, the producer-service should return a response body with 2 fields i.e. fraudCheckStatus & rejection.reason, along with a JSON content type. The below contract is in Groovy, though you can also use YAML to define the contracts.
Contract.make {
    request {
        method 'PUT'
        url '/fraudcheck'
        body([
            "client.id": $(regex('[0-9]{10}')),
            loanAmount : 99999
        ])
        headers {
            contentType('application/json')
        }
    }
    response {
        status OK()
        body([
            fraudCheckStatus  : "FRAUD",
            "rejection.reason": "Amount too high"
        ])
        headers {
            contentType('application/json')
        }
    }
}
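For reference, the same contract can be expressed in YAML. The sketch below follows the Spring Cloud Contract YAML schema (with a `by_regex` matcher replacing the inline Groovy regex); exact key spellings can vary between versions, so treat it as illustrative rather than a drop-in file from the demo repository.

```yaml
request:
  method: PUT
  url: /fraudcheck
  headers:
    Content-Type: application/json
  body:
    client.id: 1234567890   # concrete example value; the matcher below relaxes it
    loanAmount: 99999
  matchers:
    body:
      - path: $.['client.id']
        type: by_regex
        value: "[0-9]{10}"
response:
  status: 200
  headers:
    Content-Type: application/json
  body:
    fraudCheckStatus: "FRAUD"
    rejection.reason: "Amount too high"
```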
Now the producer-service references the above contract while building the project, & if the implementation is not in line with the contract, the build fails. If the implementation is correct, the producer-service publishes the stubbed response to the artifactory. The producer-service does this using a Maven plugin that verifies the contract as well as publishes the artifact. It references the contracts defined in contractDependency.
<plugin>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-contract-maven-plugin</artifactId>
    <extensions>true</extensions>
    <configuration>
        <baseClassForTests>com.varunu28.producerservice.BaseTestClass</baseClassForTests>
        <testFramework>JUNIT5</testFramework>
        <contractsMode>REMOTE</contractsMode>
        <contractsRepositoryUrl>file://${user.home}/.m2/repository</contractsRepositoryUrl>
        <contractDependency>
            <groupId>com.varunu28</groupId>
            <artifactId>contracts</artifactId>
            <version>0.0.1-SNAPSHOT</version>
        </contractDependency>
        <contractsPath>contracts/producer-service</contractsPath>
    </configuration>
</plugin>
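The baseClassForTests referenced above is the class that the generated verification tests extend; its job is to wire the real controller into the test context so the contract runs against the actual implementation. A minimal sketch of such a base class follows the standard Spring Cloud Contract pattern (the controller name FraudCheckController is an assumption for illustration, not taken from the repository):

```java
package com.varunu28.producerservice;

import io.restassured.module.mockmvc.RestAssuredMockMvc;
import org.junit.jupiter.api.BeforeEach;

public abstract class BaseTestClass {

    @BeforeEach
    public void setup() {
        // Register the real controller so the tests generated from the
        // contract exercise the actual /fraudcheck implementation
        RestAssuredMockMvc.standaloneSetup(new FraudCheckController());
    }
}
```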
The consumer-service, on the other hand, uses the stubbed implementation to run its integration tests. It doesn't need to run an application server, as the stubbed response has already been verified & built by the producer-service, so it reflects the actual implementation. Here is a minimal test (you can extend this further by inspecting individual attributes of the response body).
@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
@AutoConfigureStubRunner
public class ConsumerServiceApplicationTests {

    @StubRunnerPort("producer-service")
    int producerPort;

    @Autowired
    private LoanApplicationService loanApplicationService;

    @BeforeEach
    public void setup() {
        loanApplicationService.setPort(producerPort);
    }

    @Test
    public void shouldBeRejectedDueToAbnormalLoanAmount() {
        String response = loanApplicationService.loanApplication(99999, "1234567890");
        assert (response.contains("FRAUD"));
    }
}
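For the stub runner to locate the producer's stubs, it also needs the stub coordinates & a resolution mode, which can be supplied through application properties. The coordinates below are an assumption inferred from the Maven configuration shown earlier, not copied from the demo repository:

```yaml
stubrunner:
  # group:artifact:version:classifier of the stub jar published by producer-service
  # ("+" picks the latest available version)
  ids: 'com.varunu28:producer-service:+:stubs'
  # resolve stubs from the local Maven repository instead of a remote artifactory
  stubs-mode: LOCAL
```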
The above workflow can be done manually by a developer following the below steps:
- Make a change to the contract in the contract repository
- Clone the consumer-service & producer-service locally
- Run the build on producer-service to verify that the implementation is in line with the updated contract
- Run the integration tests for consumer-service with the updated stubs
All of the above steps can also be packaged as part of a CI workflow. I have done this using GitHub Actions as below:
name: Spring Cloud Contract Verification
on:
  push:
    paths:
      - 'spring-cloud-contract/**'
  pull_request:
    paths:
      - 'spring-cloud-contract/**'
jobs:
  contract-verification:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
      - name: Set up JDK
        uses: actions/setup-java@v4
        with:
          java-version: '21'
          distribution: 'temurin'
          cache: 'maven'
      - name: Build and install contracts
        working-directory: ./spring-cloud-contract/contracts
        run: mvn clean install
      - name: Build and install producer-service with stubs
        working-directory: ./spring-cloud-contract/producer-service
        run: mvn clean install
      - name: Verify consumer-service with generated stubs
        working-directory: ./spring-cloud-contract/consumer-service
        run: mvn clean install
      - name: Test Results Summary
        if: always()
        run: |
          echo "Contract verification completed"
          echo "All services have been built and tested against the contract"
As a demo, here is a run of the above workflow where I purposely tried to break the contract by changing a field name

The PR for the above change resulted in a CI failure because the producer-service implementation no longer fulfilled the updated contract

Misuse of contract testing
Be cognizant of the fact that contract testing, as the name suggests, tests the actual contract & not the implementation details. So in the above example, if the consumer-service supports 10 different payment methods, you shouldn't be using contract testing for all payment methods. Ideally you should test the API contract for the success & failure scenarios. For the internal implementation details, you can continue using unit tests with mocked responses, as long as changing a payment method type doesn't change the underlying API contract.
Conclusion
Contract testing is a very powerful tool for robust testing in an application with multiple integration points. It allows you to get instant feedback at the API level whenever you make changes to your code, instead of discovering issues after they have already impacted customers. It can also be useful if you don't want to chase after multiple teams whenever you want to assess the impact of a change in the contract. Just running the test workflow will allow you to verify the impact of your changes on the services that consume your APIs.
Happy learning!