Yes & No
We draw a distinction in testing style: we write tests in an integration style, but not going as far as a real database. We have a common data access layer and mock out the data it returns with test data. Everything above the data layer is tested with concrete implementations, so our tests do end up with a lot of data setup (though with Builders/Mothers/Factories we can share common data providers across many tests). The tests all run from the highest level possible and pass through all units; we have abstracted away the HTTP/message infrastructure, so our starting point is the moment the "command" reaches the domain. In the past we tested each unit individually, then moved over to testing the concrete implementations together. Our conclusions are:

1. We have far fewer tests to maintain.
2. Our tests are more resilient to change. A single dependency change or refactoring doesn't mean 45 tests need updating, just an update to the common test data, provided the change has no material effect on the expected output (our tests focus on behaviours being met).
3. We can verify the integration of units more quickly. Mocking out the units behind strategy/command patterns and testing them only in isolation has bitten us badly, so testing the full concrete implementations ensures correctness.

We deviate from this pattern where a test needs to cover many paths within a specific unit (e.g. the result is null for 3 out of 6 inputs, constructor testing, builder testing, etc.). Covering those paths from the upper level would be infeasible, so we hit that unit directly with tests.

With this approach we have seen our developers make better testing decisions and our code evolve better, because they are not swamped with updating tests just because they need to refactor for new functionality. A rough sketch of what this looks like is below.
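To make the shape of this concrete, here is a minimal sketch in Java with JUnit 5. All of the names (Order, OrderRepository, OrderService, DiscountPolicy, OrderMother) are invented for illustration, and the data layer is faked with a hand-rolled in-memory implementation; a mocking library would do the same job. The point is that everything above the data layer is the real, concrete code, and the test asserts a behaviour at the entry point into the domain:

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

import java.util.List;

class OrderServiceBehaviourTest {

    // The common data access layer: the only seam we fake in these tests.
    interface OrderRepository {
        List<Order> findByCustomer(String customerId);
    }

    // Hand-rolled fake returning canned test data; a mocking library works the same way.
    static class InMemoryOrderRepository implements OrderRepository {
        private final List<Order> canned;
        InMemoryOrderRepository(List<Order> canned) { this.canned = canned; }
        @Override public List<Order> findByCustomer(String customerId) { return canned; }
    }

    // Concrete units above the data layer: no mocks between them.
    record Order(String customerId, double total) {}

    static class DiscountPolicy {
        double discountFor(double lifetimeSpend) {
            return lifetimeSpend > 1_000 ? 0.10 : 0.0;
        }
    }

    static class OrderService {
        private final OrderRepository orders;
        private final DiscountPolicy policy;

        OrderService(OrderRepository orders, DiscountPolicy policy) {
            this.orders = orders;
            this.policy = policy;
        }

        // The entry point after the "command" has reached the domain.
        double quoteDiscount(String customerId) {
            double spend = orders.findByCustomer(customerId).stream()
                    .mapToDouble(Order::total)
                    .sum();
            return policy.discountFor(spend);
        }
    }

    // Object Mother: shared, named test data so setup stays cheap across many tests.
    static class OrderMother {
        static List<Order> loyalCustomerOrders(String customerId) {
            return List.of(new Order(customerId, 600), new Order(customerId, 700));
        }
    }

    @Test
    void loyalCustomerGetsTenPercentDiscount() {
        OrderRepository repo =
                new InMemoryOrderRepository(OrderMother.loyalCustomerOrders("c-1"));
        OrderService service = new OrderService(repo, new DiscountPolicy());

        // Behaviour asserted at the highest level; DiscountPolicy runs as a real
        // collaborator, not a mock.
        assertEquals(0.10, service.quoteDiscount("c-1"), 1e-9);
    }
}
```

If DiscountPolicy is refactored or a collaborator is swapped out, this test only needs attention when the observable behaviour (the quoted discount) actually changes.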
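And for the deviation: when a single unit has too many input paths to drive economically from the top, we test it directly. A sketch with an invented ShippingQuoteCalculator, assuming the junit-jupiter-params module for parameterized tests:

```java
import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.CsvSource;

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertNull;

class ShippingQuoteCalculatorTest {

    // Hypothetical unit where several input combinations yield no quote (null).
    // Driving every combination through the top-level entry point would be
    // infeasible, so this unit gets direct, parameterized tests.
    static class ShippingQuoteCalculator {
        Double quote(String country, double weightKg) {
            if (weightKg <= 0 || weightKg > 30) return null; // cannot ship this weight
            if ("UK".equals(country)) return 4.99;
            if ("EU".equals(country)) return 9.99;
            return null; // unsupported destination
        }
    }

    @ParameterizedTest
    @CsvSource({
            "UK, 1.0, 4.99",
            "EU, 5.0, 9.99"
    })
    void quotesSupportedRoutes(String country, double weightKg, double expected) {
        assertEquals(expected, new ShippingQuoteCalculator().quote(country, weightKg), 1e-9);
    }

    @ParameterizedTest
    @CsvSource({
            "UK, 0.0",  // zero weight
            "UK, 31.0", // over the weight limit
            "US, 5.0"   // unsupported destination
    })
    void returnsNullWhenNoQuoteIsPossible(String country, double weightKg) {
        assertNull(new ShippingQuoteCalculator().quote(country, weightKg));
    }
}
```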