GSoSD Review Discussion Point 9: Should Core systems be useful on their own, or can they depend on each other directly? #62
Comments
@jerkerdelsing, @emanuelpalm, @PerOlofsson-Sinetiq Please check whether the description of the issue properly covers what this discussion is about. |
@rbocsi Your summary is good. Thank you! By the way, I'm going to open a tracking issue for all the GSoSD review points, in case you were wondering why I renamed the issue. |
If this approach is taken, the Orchestration system is not required to even be aware of what kind of data is in the provider-service part of the consumer-provider-service triplets you are mentioning. Whoever is using the Orchestration system would then simply be assumed to know how to interpret their contents. The downside to the Orchestrator not being aware of what these fields contain is, of course, that it cannot validate them. I can accept either of these two variants of A (not validating/opaque vs. validating/transparent). As I assume most use cases to be better served by increased correctness, I vote for the validating approach. |
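To make the opaque-versus-validating distinction above concrete, here is a minimal sketch in Java, assuming a hypothetical rule store; none of the names or the naming rule below come from the actual Arrowhead API. The opaque variant accepts any strings, while the validating variant rejects malformed entries at registration time.

```java
// Hypothetical sketch of the two variants of approach A discussed above.
// Class, method, and field names are illustrative, not any Arrowhead API.
import java.util.regex.Pattern;

public class OrchestrationRuleStore {

    record Rule(String consumer, String provider, String serviceDefinition) {}

    // Variant 1: opaque. The rule is stored as-is; the Orchestrator does not
    // interpret the provider/service fields, so any string is accepted and
    // the consumer is assumed to know what the contents mean.
    public Rule storeOpaque(String consumer, String provider, String serviceDefinition) {
        return new Rule(consumer, provider, serviceDefinition);
    }

    // Variant 2: validating. The Orchestrator understands the fields well
    // enough to reject obviously malformed entries (here: a made-up naming
    // convention standing in for whatever rules the Service Registry enforces).
    private static final Pattern NAME = Pattern.compile("[a-z][a-z0-9-]*");

    public Rule storeValidated(String consumer, String provider, String serviceDefinition) {
        for (String field : new String[] {consumer, provider, serviceDefinition}) {
            if (!NAME.matcher(field).matches()) {
                throw new IllegalArgumentException("Malformed name: " + field);
            }
        }
        return new Rule(consumer, provider, serviceDefinition);
    }

    public static void main(String[] args) {
        OrchestrationRuleStore store = new OrchestrationRuleStore();
        System.out.println(store.storeOpaque("any consumer", "anything at all", "?!"));
        System.out.println(store.storeValidated("temperature-logger", "hall-sensor-1", "indoor-temperature"));
    }
}
```

The trade-off discussed above falls out directly: the opaque variant places no format requirements on its clients, while the validating variant can catch mistakes at registration time.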
Statement from AITIA: We are on the side of approach B because we believe that the fewer requirements we place on consumers/service providers, the more widely the Arrowhead Framework will be accepted. |
I think you hit the nail right on the head here, so to speak. What are the design objectives, or requirements, on the Core systems? If they are to provide the highest possible level of performance and convenience to the API consumer, as you seem to imply, then B does indeed look like the more attractive option. Performance and convenience are important, but they are not at the top of the list.

Use Cases

Firstly, what kinds of use cases is Arrowhead intended for? I think an illustrative one is the smart rock bolts of ThingWave @jenseliasson. In that use case, many of the Arrowhead systems are running on smaller and quite resource-constrained devices. The use case addresses a very particular niche, sensing rock deformation in mines and tunnels, which showcases the point I will get to quite well. The primary kind of use case for Arrowhead is industrial production, whether that is mining, refining steel, building engines, producing paper, pumping oil, producing chemicals, or any of the many and varied ways in which things are produced industrially. Arrowhead is meant to run on PLCs, welding robots, drones, intermittent sensors, as well as more traditional servers, with varying compute capacity, deployed near or far away from the manufacturing site or sites. Secondary use cases for Arrowhead are other scenarios where similar devices are used, or similar constraints apply, such as smart homes, offices and other facilities; military and aerospace solutions, including drones, fighter jets, tanks, satellites and spaceships; as well as construction solutions, health-care systems and transportation systems. You could use Arrowhead to build traditional Internet systems, but I would argue that is perhaps not even a secondary kind of use case.

Requirements

So where do these use cases bring us? Adaptability matters a lot. We have to make very few assumptions about what kind of computational resources are available to those hosting or using the Arrowhead Core systems. We also have to make few assumptions about their use cases, which means that we do not know which features offered by Arrowhead will be relevant in a given scenario. Another very core requirement is to avoid introducing single points of failure. It must be possible to design an Arrowhead system-of-systems such that it can recover from individual systems failing. As we cannot assume that there is a data center full of virtual machines we can spin up at will, recovery may look very different from use case to use case. There could, of course, be more requirements, but let's get to my point.

The Case for A

Arrowhead should be a toolkit, something you turn to when designing custom solutions for very particular problems. If we make the systems depend on each other too much, we are sacrificing adaptability by making too many assumptions about what the users of Arrowhead need. |
We never really understood why the Arrowhead Framework (core and support systems) should be deployed on edge devices. We mean that the current implementation of the Arrowhead Framework doesn't really need many resources (such as data centers); an ordinary computer with a minimal OS, or even a Raspberry Pi or something similar, is enough. We thought that only the application systems (consumers and providers) have these kinds of restrictions. If this assumption is correct, it means the consumers/providers should be as simple as possible, not the core and support systems.
There is a single point of failure in both approaches: the Service Registry. In approach A, the Orchestration system returns data that cannot be interpreted without the SR.
We think this requirement can be fulfilled if the Arrowhead systems can automatically restart when necessary and are either stateless or store any state in permanent storage (a rough sketch of this idea follows after this comment).
We agree with this concept, so here is our suggestion. Maybe we can implement two orchestration systems (with different names, of course). The first one follows approach A and provides simple functionality. The second one follows approach B, with more functionality and more convenient usage, but with dependencies on other systems. Then we can let the administrator of the local cloud choose. |
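Returning to the point above about automatic restarts and statelessness: below is a minimal sketch, assuming file-based persistence; the class name, file format, and paths are illustrative and do not reflect how any Arrowhead implementation actually stores its state. The idea is simply that a core system keeps no in-memory-only state, so an automatic restart recovers everything from durable storage.

```java
// Minimal sketch: every state change is appended to durable storage and
// replayed on startup, so an automatic restart loses nothing.
// File format and paths are illustrative only.
import java.io.IOException;
import java.nio.file.*;
import java.util.List;

public class DurableRegistry {
    private final Path storage;

    public DurableRegistry(Path storage) throws IOException {
        this.storage = storage;
        if (Files.notExists(storage)) {
            Files.createFile(storage);
        }
    }

    // Called on every registration; the durable write is the source of truth.
    public void register(String entry) throws IOException {
        Files.writeString(storage, entry + System.lineSeparator(),
                StandardOpenOption.APPEND);
    }

    // Called after a (re)start: the full state is rebuilt from storage.
    public List<String> recover() throws IOException {
        return Files.readAllLines(storage);
    }

    public static void main(String[] args) throws IOException {
        DurableRegistry registry = new DurableRegistry(Path.of("registry-state.txt"));
        registry.register("demo-provider");
        System.out.println("Recovered after restart: " + registry.recover());
    }
}
```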
First of all, I would like to extend the discussion to all Core systems and not focus on the Orchestration system only. The fundamental question is whether the Core systems should be allowed to depend on other Core systems to operate. I think the short answer is yes, that is okay. There are, however, some fundamental behaviours/requirements that are put at risk when creating/implementing systems that depend on other systems. In an ideal Arrowhead system-of-systems, all participating systems are completely independent of each other, assuring loose coupling at all times. The systems perform a well-defined and often quite simple task, often resembling what the industry calls a "microservice". These systems are combined by the choreographer of the process to perform a task, do their work, and are then disassembled and reused in another situation. This also implies the possibility of exchanging one system for another seamlessly, given that the service interfaces are supported. |
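A minimal sketch of that interchangeability, which also covers the two-orchestrator suggestion above (all interface and class names here are invented for illustration, not taken from any Arrowhead specification): as long as both orchestration variants support the same service interface, a consumer written against that interface works with either one, and the local cloud administrator can pick the variant at deployment time.

```java
// Illustrative sketch only; names are hypothetical, not Arrowhead API.
public class InterchangeableSystemsDemo {

    // The service interface both implementations support.
    interface OrchestrationService {
        String orchestrate(String consumerName);
    }

    // Variant following approach A: returns stored triplets; the consumer
    // is expected to resolve them against the Service Registry itself.
    static class TripletOrchestrator implements OrchestrationService {
        public String orchestrate(String consumerName) {
            return "triplets for " + consumerName;
        }
    }

    // Variant following approach B: consults the Service Registry on the
    // consumer's behalf and returns fully resolved endpoints.
    static class ResolvingOrchestrator implements OrchestrationService {
        public String orchestrate(String consumerName) {
            return "resolved endpoints for " + consumerName;
        }
    }

    public static void main(String[] args) {
        // The administrator of the local cloud could select the variant at
        // deployment time, e.g. via -Dorchestrator.variant=resolving.
        String variant = System.getProperty("orchestrator.variant", "triplet");
        OrchestrationService orchestrator = variant.equals("resolving")
                ? new ResolvingOrchestrator()
                : new TripletOrchestrator();

        // The consumer only depends on the service interface, so the two
        // systems can be exchanged without changing consumer code.
        System.out.println(orchestrator.orchestrate("demo-consumer"));
    }
}
```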
To comment on the discussion regarding the Orchestration system: I think it is perfectly okay to have several implementations of an orchestration system, as long as we stick to the principle of loose coupling to dependent systems. I think that Arrowhead should encourage different implementations in order to give users a diversity of components to choose from. Also, given more components to select among, the Arrowhead framework becomes more resilient. |
If you are a small fleet of satellites or autonomous submarines, then there may be no other option than to have a rather constrained system host the service registry. These may be pretty esoteric examples, I admit. But then there is also the cost issue. The smart bolt scenario I mentioned before may benefit from local service registries, and the cheaper the hardware that can host them, the better.
Yes, of course. But avoiding having more of them than strictly necessary should be a priority. That said, I think this point is a bit out of scope for this discussion; I shouldn't have brought it up. I think @PerOlofsson-Sinetiq's comment that we want many different kinds of implementations is spot on. We should strive for service interfaces to be as simple as possible, making it more feasible to have many different implementations. |
AITIA view
Yes, and we think that is the key. A reference implementation should cover most of the use cases, and there should be targeted implementations for edge cases as well. In this spirit, the core services should be designed in such a way that everyone is able to establish an Arrowhead cloud according to their specific needs. A good example of this is that we are on the same page regarding the orchestration: "having several implementations of an orchestration system".
Regarding the ServiceRegistry and Authorization Core systems, we have no concerns about them having to be useful on their own. |
Final conclusion needs to be harmonised with Points 1 & 5 & 9 |
Raymond to summarise the discussion into the GSoSD. |
Introduced by Raymond into the GSoSD |
This issue came up during the Roadmap meetings where we were discussing the GSoSD document for Arrowhead 5.0. See details here: #54.
Two competing approaches have arisen.
A. The first one states that each core system should work on its own, without the assistance of any other core system.
B. The second one states that there are features that cannot work without the assistance of other core systems.
For example, consider the Orchestration core system.
A. In the first approach, the Orchestration core system should maintain a table that contains consumer-provider-service triplets. When a consumer asks for orchestration, the core system just returns the relevant records. Then, the consumer should use this information to contact the Service Registry to acquire the necessary access information for the required providers.
B. In the second approach, the Orchestration core system (besides maintaining such a table) should always consult the Service Registry on behalf of the consumer and, in its response, return everything that is needed for the consumer to perform a service consumption.
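To make the difference between the two approaches concrete, here is a rough sketch of the two response payload shapes in Java; the field names are invented for illustration and do not reflect the actual Arrowhead orchestration payloads.

```java
// Hypothetical payload sketches for the two approaches; field names are
// illustrative, not the official Arrowhead API.
import java.util.List;

public class OrchestrationPayloads {

    // Approach A: the Orchestrator only returns matchmaking triplets.
    // The consumer must query the Service Registry itself to resolve
    // each provider into a concrete endpoint.
    record TripletResponse(List<Triplet> matches) {}
    record Triplet(String consumerName, String providerName, String serviceDefinition) {}

    // Approach B: the Orchestrator consults the Service Registry on the
    // consumer's behalf and returns everything needed to consume the service.
    record ResolvedResponse(List<ResolvedMatch> matches) {}
    record ResolvedMatch(String serviceDefinition, String providerAddress,
                         int providerPort, String serviceUri, String interfaceName) {}

    public static void main(String[] args) {
        var a = new TripletResponse(List.of(
            new Triplet("temperature-logger", "hall-sensor-1", "indoor-temperature")));
        var b = new ResolvedResponse(List.of(
            new ResolvedMatch("indoor-temperature", "192.168.1.10", 8443,
                              "/temperature", "HTTP-SECURE-JSON")));
        System.out.println("Approach A payload: " + a);
        System.out.println("Approach B payload: " + b);
    }
}
```

In approach A the consumer still has to perform a Service Registry lookup for each returned provider, whereas in approach B the response is immediately consumable; the dependency on the Service Registry moves from the consumer into the Orchestrator.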