As we’ve already explored, there are a host of ways to extract functionality into microservices. However, we need to address the elephant in the room: namely, what do we do about our data? Microservices work best when we practice information hiding, which in turn typically leads us toward microservices totally encapsulating their own data storage and retrieval mechanisms. This leads us to the conclusion that when migrating toward a microservice architecture, we need to split our monolith’s database apart if we want to get the best out of the transition.

Splitting a database apart is far from a simple endeavor, however. We need to consider issues of data synchronization during transition, logical versus physical schema decomposition, transactional integrity, joins, latency, and more. Throughout this chapter, we’ll take a look at these issues and explore patterns that can help us.

Before we start with splitting things apart, though, we should look at the challenges (and coping patterns) for managing a single shared database.

Back in Chapter 3, I discussed my experiences in helping re-platform an existing credit derivative system for a now defunct investment bank. We hit the issue of database coupling in a big way: we had a need to increase the throughput of the system in order to provide faster feedback to the traders who used the system. After some analysis, we found that the bottleneck in the processing was the writes being made into our database. After a quick spike, we realized we could drastically increase the write performance of our system if we restructured the schema.
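The information-hiding idea above can be sketched in miniature. The example below is not from the chapter; `InvoiceService`, its methods, and its schema are all hypothetical. The point it illustrates is the one the text makes: callers interact only with the service’s API, so the service is free to restructure its internal schema (for instance, to improve write performance) without breaking any consumer.

```python
import sqlite3


class InvoiceService:
    """Hypothetical microservice facade (illustrative only).

    The database handle and table layout are private: consumers call the
    methods below and never touch the schema, so the service can
    restructure its storage without coordinating with other teams.
    """

    def __init__(self) -> None:
        # An in-memory SQLite database stands in for the service's own
        # datastore; no other service gets a connection to it.
        self._db = sqlite3.connect(":memory:")
        self._db.execute(
            "CREATE TABLE invoices (id INTEGER PRIMARY KEY,"
            " customer TEXT, total REAL)"
        )

    def record_invoice(self, customer: str, total: float) -> int:
        """Store an invoice and return its generated id."""
        cur = self._db.execute(
            "INSERT INTO invoices (customer, total) VALUES (?, ?)",
            (customer, total),
        )
        self._db.commit()
        return cur.lastrowid

    def total_for(self, customer: str) -> float:
        """Sum of all invoice totals for one customer (0 if none)."""
        row = self._db.execute(
            "SELECT COALESCE(SUM(total), 0) FROM invoices"
            " WHERE customer = ?",
            (customer,),
        ).fetchone()
        return row[0]


svc = InvoiceService()
svc.record_invoice("acme", 100.0)
svc.record_invoice("acme", 50.0)
print(svc.total_for("acme"))  # 150.0
```

Contrast this with the shared-database situation described next, where consumers bind directly to the tables themselves and any schema change risks breaking them.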