Code Project / Web Development / ASP.NET

Communication between 2 MVC API's on the same server

Tags: question, csharp, asp-net, architecture, database
5 Posts, 3 Posters
Fred2834 (#1) wrote:

    Hello! I have a question about the best approach/design pattern to achieve communication between two or more MVC APIs on the same machine.

    Currently I have a single Web API delivering data to my web site. For reasons too long to explain here, we are refactoring the API to split it into multiple modules: one main API with the core information (think: an order database with detailed info), and satellite APIs that have lists of orders, but without their details. Of course, when a satellite API gets a request, it could just reply with the list of orders (which it knows of), but I want it to reply with the details of each order. Important to mention: all data is in memory; there is no database behind the whole thing (otherwise it would be trivial).

    Knowing that all the APIs would be on the same server, and that I obviously want to avoid ping-pong between the client site and the APIs, I would like to have my satellite API ask my main API for this information. I have looked into WCF services, but I am not sure whether that is the appropriate solution, nor how to integrate it into my existing projects. I already have the multiple APIs running (not in prod) as normal ASP.NET MVC projects. In an ideal world, I would simply "expose" a few methods from the main API so that the other ones can call them. I have also looked at the other ways to do IPC, but I am unsure which would be best in my case.

    Side questions about WCF services:

    - Do they have to be a project of their own, or could I say that a few methods in my API are the actual service?
    - What about the service reference that the client has to know of? How do I specify where the service actually runs? This is not necessarily known during dev; what if the server is hosted elsewhere?
    - Do I have to care about async calls here?

    Note: I'm obviously new to WCF, be kind :)

    Thanks a lot, Fred

In reply to Fred2834 (#1):

Afzaal Ahmad Zeeshan (#2) wrote:

      Since you have kept the question compact, the one-liner answer would be to use a microservices architecture and run each individual module as a separate service; read here: [.NET Microservices. Architecture for Containerized .NET Applications | Microsoft Docs](https://docs.microsoft.com/en-us/dotnet/standard/microservices-architecture/)

      Quote:

      I have a single Web API delivering data to my web site.

      I can see a microservice here.

      Quote:

      when a satellite API gets a request,

      Another microservice here.

      Quote:

      Important to mention is that all data is in memory, no database is behind the whole thing (otherwise it would be trivial).

      Another microservice here: one that manages the data for the entire system. Keeping the data aside and giving it high availability matters, because if this service goes down, the entire cluster goes down with it. This is the point you need to discuss with your team again.

      Quote:

      I would like to have my satellite API ask my main API about this information.

      Exactly the core benefit of a microservices architecture: your clients would only need to know about the website. Internal communication and services would be abstracted away from them.

      Quote:

      I have looked into WCF services,

      WCF is quite old and doesn't really fit the demands of modern solutions. If, for instance, you later want to move to cloud hosting, microservices and their modern approach to communication are much more scalable and feasible. WCF does support various protocols, but sometimes that extra optionality is itself a downside. Trust me on this (or not). :-)

      Quote:

      I would simply "expose" a few methods from the main API, so that the other ones can call it. I've also looked at the other ways to do IPC, but I am unsure which would be best in my case.

      Since ASP.NET Web API is REST-based, think mostly in terms of REST; with REST you can use tooling like OpenAPI (Swagger) to generate documentation that your users can access: [About Swagger Specification | Documentation | Swagger | Swagger](https://swagger.io/docs/specification/about/) Now, coming to the last part of your question,

      Quote:

      do they have to be a proje
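      The Swagger/OpenAPI suggestion above can be sketched as follows. This is a minimal sketch assuming the Swashbuckle.AspNetCore NuGet package in an ASP.NET Core project; the classic Swashbuckle package for .NET Framework Web API 2 (which matches the era of this thread) is configured through a generated `SwaggerConfig.cs` instead, but the idea is the same.

```csharp
// Minimal Swagger/OpenAPI wiring, assuming Swashbuckle.AspNetCore.
using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.DependencyInjection;

var builder = WebApplication.CreateBuilder(args);
builder.Services.AddControllers();
// Register the OpenAPI generator so the API's routes are documented.
builder.Services.AddEndpointsApiExplorer();
builder.Services.AddSwaggerGen();

var app = builder.Build();
// Serve the generated document and the interactive UI.
app.UseSwagger();    // exposes /swagger/v1/swagger.json
app.UseSwaggerUI();  // exposes the browsable /swagger page
app.MapControllers();
app.Run();
```

      With the document published, each satellite team can read (or even code-generate a client from) the main API's contract instead of asking around.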

In reply to Fred2834 (#1):

Nathan Minier (#3) wrote:

        Yeah, I've played this game before, and while it's nice to think that everything is in memory and so it should be easy, there are some surprising hurdles to clear first. Here are some of my IPC notes.

        The most important initial concern you need to address is: what is the exact nature of the communication that needs to occur between the services?

        - Is the communication simplex, or do we need duplex?
        - What sort of update rate can we expect to see?
        - Do we need to acknowledge the connection (like TCP), or can we fire-and-forget (like UDP)?
        - Will you only ever have 2 communicating modules, or might you want to add more down the line?

        Defining the nature of the interactions will help you determine the appropriate technology to use.

        In the most basic model, one application writes data that can be read by another; the easiest thing in the world is to have it write to a file that the other app(s) will read. This is a good, stable solution that works well where race conditions are low-impact issues.

        If you need duplex communication, things become slightly more complicated. For most purposes, a LocalDB instance can be used to share data between applications. This can work very well for bi-directional communication, but, like the file method, it does not work for event-based interactions: it just shares flat data.

        If the two processes need to talk, i.e. one needs to actively query the other, the level of complexity jumps considerably. You can query the other over the NIC using the standard Web API format and HttpClients, and that's likely to be your easiest solution. I know it sounds stupid, but having walked this path before: unless you're willing to spend some time architecting an IPC process, it is the easiest route. Another option is to have your process actively watch directories for changes and use the file approach, but there is a serious performance cost to implementing that.
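        The HttpClient route can be sketched in a few lines. The base address, port, route, and return shape below are hypothetical placeholders; in practice the address would come from configuration so the satellite still works when the main API moves to another host.

```csharp
// A satellite API asking the main API for order details over localhost.
using System;
using System.Net.Http;
using System.Threading.Tasks;

public class MainApiClient
{
    // Reuse a single HttpClient instance; creating one per request
    // exhausts sockets under load.
    private static readonly HttpClient Http = new HttpClient
    {
        // Assumed address; read this from config in a real deployment.
        BaseAddress = new Uri("http://localhost:5000/")
    };

    public async Task<string> GetOrderDetailsJsonAsync(int orderId)
    {
        // Same-machine call, but still a normal HTTP round trip.
        var response = await Http.GetAsync($"api/orders/{orderId}");
        response.EnsureSuccessStatusCode();
        return await response.Content.ReadAsStringAsync();
    }
}
```

        Even on the same server this goes through the loopback network stack, which is exactly why it is simple: it reuses the Web API plumbing you already have.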
        The other option is to make use of an inter-process communication (IPC) technology. In .NET that generally means using pipes or memory-mapped files. Both have benefits and pitfalls, but they are best suited to event-driven processes and the communication of complex data. Memory-mapped files are the closest thing to what you're initially talking about: a section of memory that can be shared between multiple processes, with certain thread-safety mechanisms that attempt to address concurrency problems. I can tell you from the start, there are complexities to using t
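        To make the pipe option concrete, here is a minimal request/reply sketch over a named pipe: the main API hosts a pipe server, a satellite connects, sends an order id, and reads back the serialized details. The pipe name and the line-based wire format are assumptions for illustration; the server here answers with a stub payload.

```csharp
// Named-pipe request/reply between two processes on the same machine.
using System.IO;
using System.IO.Pipes;
using System.Threading.Tasks;

public static class PipeDemo
{
    // Main-API side: accept one connection, read an order id, reply.
    public static async Task ServeOnceAsync()
    {
        using var server = new NamedPipeServerStream("orders-pipe");
        await server.WaitForConnectionAsync();
        using var reader = new StreamReader(server);
        using var writer = new StreamWriter(server) { AutoFlush = true };
        var orderId = await reader.ReadLineAsync();           // request
        await writer.WriteLineAsync($"{{\"id\":{orderId}}}"); // stub reply
    }

    // Satellite side: connect, send the id, read the reply line.
    public static async Task<string> AskAsync(string orderId)
    {
        using var client = new NamedPipeClientStream(
            ".", "orders-pipe", PipeDirection.InOut);
        await client.ConnectAsync();
        using var writer = new StreamWriter(client) { AutoFlush = true };
        using var reader = new StreamReader(client);
        await writer.WriteLineAsync(orderId);
        return await reader.ReadLineAsync();
    }
}
```

        Compared with the HTTP route, this avoids the network stack but ties you to hand-rolled framing and error handling, which is the architecting work Nathan warns about.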

In reply to Afzaal Ahmad Zeeshan (#2):

Fred2834 (#4) wrote:

          Thanks! I will dig deeper into the microservices architecture; in fact, that's more or less my goal with each API. Good to know that WCF can be left out of the picture... it was kind of everywhere when I googled looking for alternatives :s Fred

In reply to Nathan Minier (#3):

Fred2834 (#5) wrote:

            The end game looks something like this:

            - One main API
            - Several smaller APIs, mostly independent but occasionally needing to get data from the main API's in-memory repository.

            If I'd had the liberty, I would have used Redis or something similar instead of having one in-memory repo for each API, but that's not a possibility due to company standards etc. That would have been easy :) because, aside from sharing some data, my APIs don't really need to communicate.

            Nathan Minier wrote:

            what is the exact nature of the communications that need to occur between the services?

            The smaller APIs would just request info and the main API would reply, never the other way around.

            Nathan Minier wrote:

            Is the communication simplex or do we need duplex?

            Simplex. Only from the smaller APIs to the main API, never the other way around (otherwise I'd have seriously screwed up my architecture...).

            Nathan Minier wrote:

            What sort of update rate can we expect to see?

            The actual update of the data in the main API's memory is not an issue; I already have a flip-flop mechanism that prevents delivering data while it is being updated. Basically, anyone asking for data only gets it from the "active" part of memory while the "inactive" part is being updated, so the smaller APIs would also query the "active" part. And I'm not managing TBs of data either :)
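            The flip-flop mechanism described above can be sketched as a double-buffered repository: readers always see a complete snapshot while a fresh one is built off to the side and swapped in with a single atomic reference write. The type and member names are assumptions, not Fred's actual code.

```csharp
// Double-buffered ("active"/"inactive") in-memory repository sketch.
using System.Collections.Generic;

public class FlipFlopRepository<T>
{
    // Readers only ever dereference this field. Reference assignment
    // is atomic in .NET, so a swap never exposes a half-built list;
    // volatile keeps readers from caching a stale reference.
    private volatile IReadOnlyList<T> _active = new List<T>();

    // The "active" side: what every query (including satellite APIs) sees.
    public IReadOnlyList<T> Snapshot => _active;

    // The updater builds the "inactive" copy in private, then flips it in.
    public void Publish(IEnumerable<T> freshData)
    {
        var inactive = new List<T>(freshData); // built off to the side
        _active = inactive;                    // the flip: one atomic write
    }
}
```

            In-flight readers holding the old snapshot keep a consistent (if slightly stale) view until they finish, which matches the "never deliver data mid-update" behaviour described.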

            Nathan Minier wrote:

            Do we need to acknowledge connection (like TCP) or can we fire-and-forget (like UDP)?

            Fire-and-forget could do (it's on the same server), as long as I can ensure there is no data loss (which screams TCP rather than UDP anyway :s).

            Nathan Minier wrote:

            Will you only ever have 2 communicating modules, or might you want to add more down the line?

            One main, multiple "clients". They're not truly clients, because all the APIs would be autonomous and manage their own set of data, but the whole thing is related, and in time we will want to cover more business areas, so we will definitely have more "clients".
