
Micro-Service with mind-map

If we were to give a definition to micro-services, what would it be?
A simple one: an architectural style that functionally decomposes an application into a set of services, where each service has a focused, cohesive set of responsibilities.

Like most architectural styles, it comes with properties & practices, which we can categorize into general ones and detailed ones (the “12 factors”).

Beyond the 12 factors, some general practices to consider while decomposing services:
  • Loose coupling: minimal communication and dependencies between services.
  • Cohesion: elements that are tightly related to each other and change together should stay together (Common Closure Principle, CCP).
  • Single responsibility principle (SRP): every micro-service should do one thing and do it well.
When constructing an application or defining its architecture, we follow the three-step process below:
  1. Identify the system operations (the functional requirements), which come from the user stories and their associated user scenarios.
  2. Define services by applying decomposition.
  3. Define each service’s API and how services collaborate; a small sketch of such an API follows this list.
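As a minimal illustration of step 3, here is what a service API contract could look like; the OrderService name, its operations, and the record types are hypothetical, not taken from any specific system:

import java.util.List;

// Hypothetical contract for an Order service: the operations it exposes to
// collaborating services and clients. All names are illustrative only.
record OrderItem(String productId, int quantity) {}
record Order(long id, long customerId, List<OrderItem> items, String state) {}

interface OrderService {
    Order createOrder(long customerId, List<OrderItem> items); // command: changes state
    Order getOrder(long orderId);                              // query: reads state
    void cancelOrder(long orderId);                            // command
}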
As stated in the definition and in step #2, we need to decompose (“break down”) the application, which can be done in one of two ways: by business capability or by subdomain.

Subdomain decomposition depends largely on the developer’s understanding of the domain and experience in designing software systems. It focuses on how different parts of the system are structured, connected, and depend on each other from a technical perspective. This approach can reveal separations that are not obvious in business-focused models, but it may overlook important distinctions within the domain that are better captured through business-based thinking.

For properties & practices, see the post “Not all Ps sting”.

Patterns:

Implementing use cases that span multiple services requires unfamiliar techniques.

  • Each service has its own database, which makes it a challenge to implement transactions and queries that span services. 
    • Change Data Capture (CDC) is a technique that streams database changes from one service’s data store to other services, ensuring data consistency and enabling event-driven architectures (see the polling sketch after this list).
    • A micro-services-based application can’t retrieve data from multiple services using simple queries. Instead, it must implement queries using either API composition or CQRS views (see the composition sketch after this list). Command Query Responsibility Segregation, as the name suggests, is all about segregation, the separation of concerns: it splits a persistent data model and the modules that use it into two parts, the command side and the query side.
  • A micro-services-based application must use what are known as sagas to maintain data consistency across services (see the saga sketch after this list).
    • Transactions = ACID (Atomicity, Consistency, Isolation, Durability).
    • With sagas, transactions are ACD (Atomicity, Consistency, Durability): the Isolation property is lost, because each service commits its own local transaction independently.
  • Use semantic versioning when changing an API. In order to support multiple versions of an API, the service’s adapters that implement the APIs contain logic that translates between the old and new versions (see the adapter sketch after this list).
  • An API gateway is similar to the Facade pattern from object-oriented design. Like a facade, an API gateway encapsulates the application’s internal architecture and provides an API to its clients. It may also have other responsibilities, such as authentication, monitoring, and rate limiting (see the routing sketch after this list). Its main functions:
    • Request routing.
    • API Composition.
    • Protocol translation.
    • Edge functions (caching, logging, rate limiting, authentication, authorization).
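For the CDC bullet above, real tools such as Debezium tail the database’s transaction log; as a dependency-free sketch of the idea, the polling variant below reads new rows from a change-log table and forwards them as events (the table and column names are hypothetical):

import java.sql.*;

// Simplified, polling-based sketch of Change Data Capture: tail a change-log
// table and forward each new row as an event. Production CDC tools read the
// database's transaction log instead of polling a table.
public class ChangeLogPoller {
    private long lastSeenId = 0;

    public void pollOnce(Connection db) throws SQLException {
        String sql = "SELECT id, event_type, payload " +
                     "FROM change_log WHERE id > ? ORDER BY id";
        try (PreparedStatement ps = db.prepareStatement(sql)) {
            ps.setLong(1, lastSeenId);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    publish(rs.getString("event_type"), rs.getString("payload"));
                    lastSeenId = rs.getLong("id");
                }
            }
        }
    }

    private void publish(String type, String payload) {
        // In a real system this would go to a message broker (e.g. Kafka).
        System.out.printf("event %s -> %s%n", type, payload);
    }
}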
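For the querying bullet, a minimal API-composition sketch: one composer calls two services over HTTP and joins the results in memory (the service URLs and paths are hypothetical):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// API composition sketch: answer one query by calling several services and
// combining the partial results in memory.
public class OrderDetailsComposer {
    private final HttpClient http = HttpClient.newHttpClient();

    public String getOrderDetails(long orderId) throws Exception {
        String order    = fetch("http://order-service/orders/" + orderId);
        String customer = fetch("http://customer-service/customers/for-order/" + orderId);
        // Compose the two partial results into a single response.
        return "{ \"order\": " + order + ", \"customer\": " + customer + " }";
    }

    private String fetch(String url) throws Exception {
        HttpRequest req = HttpRequest.newBuilder(URI.create(url)).GET().build();
        return http.send(req, HttpResponse.BodyHandlers.ofString()).body();
    }
}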
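For the saga bullet, a minimal orchestration-style sketch: each step is a local transaction with a compensating action, and when a step fails the already-committed steps are compensated in reverse order (the step abstraction is hypothetical):

import java.util.ArrayDeque;
import java.util.Deque;

// Orchestration-style saga sketch: run each service-local transaction in
// order; on failure, undo the steps that already committed.
public class CreateOrderSaga {

    interface Step {
        void execute();     // local transaction in one service
        void compensate();  // undoes that local transaction
    }

    public void run(Step... steps) {
        Deque<Step> completed = new ArrayDeque<>();
        try {
            for (Step step : steps) {
                step.execute();
                completed.push(step);
            }
        } catch (RuntimeException e) {
            // Sagas have no isolation across services, so consistency is
            // restored by compensating in reverse order, not by a rollback.
            while (!completed.isEmpty()) {
                completed.pop().compensate();
            }
            throw e;
        }
    }
}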
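For the versioning bullet, a sketch of the adapter idea: the service implements only the new model internally, and an adapter translates old-version requests into it (the request shapes are hypothetical):

// API-versioning adapter sketch: the service only implements the new (v2)
// model; the adapter translates old (v1) requests into it.
record V1CreateUser(String fullName) {}                    // old API: one field
record V2CreateUser(String firstName, String lastName) {}  // new API: split fields

class UserApiV1Adapter {
    V2CreateUser translate(V1CreateUser old) {
        // Split the legacy single-name field to fit the new contract.
        String[] parts = old.fullName().split(" ", 2);
        return new V2CreateUser(parts[0], parts.length > 1 ? parts[1] : "");
    }
}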
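For the API-gateway bullet, a request-routing sketch: a routing table maps each path prefix to the internal service that owns it (the service addresses are hypothetical):

import java.util.Map;

// Request-routing sketch for an API gateway: resolve the upstream service
// that owns a given path prefix.
public class GatewayRouter {
    private static final Map<String, String> ROUTES = Map.of(
        "/orders",    "http://order-service:8080",
        "/customers", "http://customer-service:8080",
        "/payments",  "http://payment-service:8080"
    );

    public static String route(String path) {
        return ROUTES.entrySet().stream()
            .filter(e -> path.startsWith(e.getKey()))
            .map(e -> e.getValue() + path)
            .findFirst()
            .orElseThrow(() -> new IllegalArgumentException("no route: " + path));
    }

    public static void main(String[] args) {
        System.out.println(route("/orders/42")); // -> http://order-service:8080/orders/42
    }
}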

  • Other patterns to consider:
    • Strangler design pattern for migration: a component acts as a proxy, directing traffic to the old monolith until the new micro-service is tested, then directing the traffic to the new one (see the proxy sketch below).
    • Listen-to-yourself pattern. Scenario: a service updates a local NoSQL database and also notifies a legacy system of record about the activity, and must stay consistent with that monolith system of record; the two writes need to be decoupled, so the service publishes an event first and performs its own local update by consuming that same event (see the sketch below).
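A sketch of the strangler proxy’s routing decision: endpoints that have already been migrated go to the new service, everything else still goes to the monolith (the endpoint names and addresses are hypothetical, and the migrated set grows as migration proceeds):

import java.util.Set;

// Strangler sketch: route migrated endpoints to the new micro-service and
// everything else to the old monolith.
public class StranglerProxy {
    private static final Set<String> MIGRATED = Set.of("/orders", "/payments");

    public static String upstreamFor(String path) {
        boolean migrated = MIGRATED.stream().anyMatch(path::startsWith);
        return (migrated ? "http://new-service:8080" : "http://monolith:8080") + path;
    }

    public static void main(String[] args) {
        System.out.println(upstreamFor("/orders/1"));   // new micro-service
        System.out.println(upstreamFor("/invoices/7")); // still the monolith
    }
}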
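And a sketch of listen-to-yourself: instead of a dual write (local database plus legacy notification), the service performs a single publish and updates its own store by consuming that event, just like the legacy consumer does (the in-memory broker and store below are hypothetical stand-ins for a real message broker and NoSQL database):

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// Listen-to-yourself sketch: publish the event first, then handle it like any
// other subscriber; the legacy system of record consumes the same stream.
public class ListenToYourself {
    private final List<Consumer<String>> subscribers = new ArrayList<>();
    private final Map<String, String> localNoSqlStore = new HashMap<>();

    public void start() {
        // The service subscribes to its own events to update its local store...
        subscribers.add(e -> {
            String[] kv = e.split(":", 2);
            localNoSqlStore.put(kv[0], kv[1]);
        });
        // ...and the legacy system of record consumes the same events.
        subscribers.add(e -> System.out.println("notify legacy system: " + e));
    }

    public void recordActivity(String id, String activity) {
        // Single write: publish one event instead of updating two systems.
        publish(id + ":" + activity);
    }

    private void publish(String event) {
        subscribers.forEach(s -> s.accept(event));
    }
}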


References:
 - Microservices IO
 - Microservice book
