Subsystems changing their own configuration model


Subsystems changing their own configuration model

Tristan Tarrant
We would like to allow a subsystem to modify its configuration model by
itself at runtime. The use case for this is that we would like to let a
remote Hot Rod client create a cache on an Infinispan server and make
that configuration persistent across restarts. I understand this means
creating management ops which are orthogonal to the server's.
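
To illustrate, the client-side interaction we have in mind would look
roughly like this (a sketch only: the administration() API, the server
address and the template name are placeholders for what the linked
document describes):

    import org.infinispan.client.hotrod.RemoteCacheManager;
    import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;

    public class CreateCacheOverHotRod {
        public static void main(String[] args) {
            ConfigurationBuilder builder = new ConfigurationBuilder();
            builder.addServer().host("127.0.0.1").port(11222);
            try (RemoteCacheManager remote = new RemoteCacheManager(builder.build())) {
                // Ask the server to create (or return) a cache built from an
                // existing configuration template, persisting it server-side.
                remote.administration().getOrCreateCache("orders", "distributed-template");
            }
        }
    }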

I guess that in standalone mode this wouldn't be too much of an issue,
with two caveats:
- all nodes in a cluster should apply the changes to their own
configuration, leveraging the model rollback mechanism to handle
failures on other nodes (a rough sketch of this follows below)
- new nodes joining the cluster (and therefore with a possibly outdated
configuration) would receive the configuration of caches already running
in the cluster and apply it locally
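
A rough sketch of what that propagation could look like over the cluster
transport, using a JGroups-style RPC (the JGroups calls are real;
everything on our side is illustrative):

    import org.jgroups.JChannel;
    import org.jgroups.blocks.MethodCall;
    import org.jgroups.blocks.RequestOptions;
    import org.jgroups.blocks.RpcDispatcher;
    import org.jgroups.util.RspList;

    public class CacheConfigPropagator {
        private final RpcDispatcher dispatcher;

        public CacheConfigPropagator(JChannel channel) {
            this.dispatcher = new RpcDispatcher(channel, this);
        }

        // Invoked remotely on every node: apply the change to the local
        // subsystem model and persist it.
        public boolean createCache(String name, String template) {
            // ... apply and persist locally ...
            return true;
        }

        public boolean removeCache(String name) {
            // ... undo the local change ...
            return true;
        }

        public void propagateCreate(String name, String template) throws Exception {
            MethodCall create = new MethodCall("createCache",
                    new Object[] { name, template },
                    new Class<?>[] { String.class, String.class });
            RspList<Boolean> responses =
                    dispatcher.callRemoteMethods(null, create, RequestOptions.SYNC());
            // If any node failed to apply the change, broadcast a compensating
            // removeCache: this is the subsystem-level "rollback".
            boolean failed = responses.getResults().stream()
                    .anyMatch(ok -> !Boolean.TRUE.equals(ok));
            if (failed) {
                MethodCall remove = new MethodCall("removeCache",
                        new Object[] { name }, new Class<?>[] { String.class });
                dispatcher.callRemoteMethods(null, remove, RequestOptions.SYNC());
            }
        }
    }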

The real tricky bit is obviously domain mode. The server receiving the
cache creation request would need to delegate it to the DC who would
then apply it across the profile. However this clashes with the fact
that, as far as I know, there is no way for a server to communicate with
its DC. Is this type of functionality planned?

I have created a document which describes what we'd like to do at [1]

Tristan


[1] https://github.com/infinispan/infinispan/wiki/Create-Cache-over-HotRod

--
Tristan Tarrant
Infinispan Lead
JBoss, a division of Red Hat

Re: Subsystems changing their own configuration model

Brian Stansberry

> On Sep 13, 2016, at 5:46 AM, Tristan Tarrant <[hidden email]> wrote:
>
> We would like to allow a subsystem to modify its configuration model by
> itself at runtime. The use case for this is that we would like to let a
> remote Hot Rod client create a cache on an Infinispan server and make
> that configuration persistent across restarts. I understand this means
> creating management ops which are orthogonal to the server’s.
>

There is nothing fundamental that says you can’t do this so long as you’re not trying to execute these ops as part of the start/stop of an MSC service. It doesn’t sound like that’s the case.

It does sound though like you want to expose management over the non-management interface. That’s a significant security hole.


> I guess that in standalone mode this wouldn't be too much of an issue,
> with two caveats:
> - all nodes in a cluster should apply the changes to their own
> configuration, leveraging the model rollback mechanism to handle
> failures on other nodes

There is no multi-process model rollback mechanism with standalone servers.

> - new nodes joining the cluster (and therefore with a possibly outdated
> configuration) would receive the configuration of caches already running
> in the cluster and apply it locally
>

How does this happen?

> The real tricky bit is obviously domain mode. The server receiving the
> cache creation request would need to delegate it to the DC who would
> then apply it across the profile. However this clashes with the fact
> that, as far as I know, there is no way for a server to communicate with
> its DC. Is this type of functionality planned?
>

Not actively. We looked into it a bit in the context of DOMAIN_PING but for that use case it became apparent that invoking a bunch of management ops against the DC was a non-scalable solution. This sounds different; a server would need to figure out what profile stores the infinispan subsystem config (it may not be the final one associated with the server group since profile x can include profile y) and then make a change to that profile. That’s scalable.
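
In sketch form, the change the DC would end up applying is just an
ordinary management op targeted at the owning profile; something like the
following, where the addresses and attribute values are purely
illustrative:

    import org.jboss.as.controller.client.ModelControllerClient;
    import org.jboss.dmr.ModelNode;

    public class AddCacheToProfile {
        public static void main(String[] args) throws Exception {
            // The server would first have to resolve which profile actually
            // holds the infinispan subsystem config (following profile
            // includes), then target that profile on the DC.
            ModelNode op = new ModelNode();
            op.get("operation").set("add");
            op.get("address").add("profile", "full-ha")
                             .add("subsystem", "infinispan")
                             .add("cache-container", "clustered")
                             .add("distributed-cache", "orders");
            op.get("mode").set("SYNC"); // illustrative cache attribute

            try (ModelControllerClient client =
                    ModelControllerClient.Factory.create("dc-host", 9990)) {
                ModelNode result = client.execute(op);
                if (!"success".equals(result.get("outcome").asString())) {
                    throw new IllegalStateException(
                            result.get("failure-description").asString());
                }
            }
        }
    }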

If we set up this kind of connection we’d need to ensure the caller’s security context propagates. Having an external request come in to a server and then get treated by the DC as if it were from a trusted caller like a slave HC or server would be bad.


--
Brian Stansberry
Manager, Senior Principal Software Engineer
JBoss by Red Hat





Re: Subsystems changing their own configuration model

Tristan Tarrant
On 13/09/16 22:40, Brian Stansberry wrote:
>>
>
> There is nothing fundamental that says you can’t do this so long as you’re not trying to execute these ops as part of the start/stop of an MSC service. It doesn’t sound like that’s the case.

Indeed.

> It does sound though like you want to expose management over the non-management interface. That’s a significant security hole.

Well, it is a protocol operation which has a management side-effect. The
way we have approached this in other similar situations is to either
require access through a loopback interface, or to require that
authentication and authorization be enabled on the endpoint and that the
subject requesting the operation hold the Admin permission. Note however
that the Hot Rod endpoint would be using a different security realm from
the management one.
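
In code, the guard on the endpoint would be something like this (a
sketch: AuthorizationPermission.ADMIN is the existing Infinispan
permission, while the authorizer interface is a hypothetical stand-in for
whatever the endpoint uses):

    import java.net.InetAddress;
    import javax.security.auth.Subject;
    import org.infinispan.security.AuthorizationPermission;

    public class AdminOpGuard {

        // Hypothetical stand-in for the endpoint's authorization service.
        interface EndpointAuthorizer {
            boolean hasPermission(Subject subject, AuthorizationPermission permission);
        }

        private final EndpointAuthorizer authorizer;

        AdminOpGuard(EndpointAuthorizer authorizer) {
            this.authorizer = authorizer;
        }

        void checkCreateCacheAllowed(InetAddress remote, Subject subject) {
            // Access through a loopback interface is trusted as-is.
            if (remote.isLoopbackAddress()) {
                return;
            }
            // Otherwise authn/authz must be enabled on the endpoint and the
            // subject must hold the Admin permission.
            if (subject == null
                    || !authorizer.hasPermission(subject, AuthorizationPermission.ADMIN)) {
                throw new SecurityException("cache creation requires ADMIN permission");
            }
        }
    }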

>> I guess that in standalone mode this wouldn't be too much of an issue,
>> with two caveats:
>> - all nodes in a cluster should apply the changes to their own
>> configuration, leveraging the model rollback mechanism to handle
>> failures on other nodes
>
> There is no multi-process model rollback mechanism with standalone servers.

I know: this would have to be implemented by the subsystem using the
cluster transport.

>> - new nodes joining the cluster (and therefore with a possibly outdated
>> configuration) would receive the configuration of caches already running
>> in the cluster and apply it locally
>>
>
> How does this happen?

Again, via the cluster transport.

>> The real tricky bit is obviously domain mode. The server receiving the
>> cache creation request would need to delegate it to the DC who would
>> then apply it across the profile. However this clashes with the fact
>> that, as far as I know, there is no way for a server to communicate with
>> its DC. Is this type of functionality planned?
>>
>
> Not actively. We looked into it a bit in the context of DOMAIN_PING but for that use case it became apparent that invoking a bunch of management ops against the DC was a non-scalable solution. This sounds different; a server would need to figure out what profile stores the infinispan subsystem config (it may not be the final one associated with the server group since profile x can include profile y) and then make a change to that profile. That’s scalable.
>
> If we set up this kind of connection we’d need to ensure the caller’s security context propagates. Having an external request come in to a server and then get treated by the DC as if it were from a trusted caller like a slave HC or server would be bad.

As described above, the caller might not be in the same security realm
as the management interface.


--
Tristan Tarrant
Infinispan Lead
JBoss, a division of Red Hat

Re: Subsystems changing their own configuration model

Tomaž Cerar

On Wed, Sep 14, 2016 at 10:54 AM, Tristan Tarrant <[hidden email]> wrote:
>> - new nodes joining the cluster (and therefore with a possibly outdated
>> configuration) would receive the configuration of caches already running
>> in the cluster and apply it locally
>>
>
> How does this happen?

> Again, via the cluster transport.


How can this be? Isn't the cache configuration managed by the management
model, and as such always "latest"? Or does the cache change its own
configuration independently?


Re: Subsystems changing their own configuration model

Darran Lofthouse
In reply to this post by Tristan Tarrant
On 14/09/16 09:54, Tristan Tarrant wrote:

> On 13/09/16 22:40, Brian Stansberry wrote:
>>>
>>
>> There is nothing fundamental that says you can’t do this so long as you’re not trying to execute these ops as part of the start/stop of an MSC service. It doesn’t sound like that’s the case.
>
> Indeed.
>
>> It does sound though like you want to expose management over the non-management interface. That’s a significant security hole.
>
> Well, it is a protocol operation which has a management side-effect. The
> way we have approached this in other similar situations is to either
> require access through a loopback interface, or to require that
> authentication and authorization be enabled on the endpoint and that the
> subject requesting the operation hold the Admin permission. Note however
> that the Hot Rod endpoint would be using a different security realm from
> the management one.

FYI, for WildFly 11, if a call remains in-VM and goes from the
application tier to the management tier, we will have a mechanism for the
identity to be inflowed to the security domain used for management, which
will allow management access control to be used.
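
In sketch form, using the Elytron API (simplified, and the exact calls
may differ in what we finally ship):

    import org.wildfly.security.auth.server.SecurityDomain;
    import org.wildfly.security.auth.server.SecurityIdentity;
    import org.wildfly.security.auth.server.ServerAuthenticationContext;

    public class IdentityInflowSketch {

        // Take the identity established against the application-tier security
        // domain and inflow it into the management domain, so that management
        // access control applies to it. Assumes an application-tier domain is
        // associated with the calling thread; error handling omitted.
        public SecurityIdentity inflowToManagement(SecurityDomain managementDomain)
                throws Exception {
            SecurityIdentity caller =
                    SecurityDomain.getCurrent().getCurrentSecurityIdentity();
            ServerAuthenticationContext context =
                    managementDomain.createNewAuthenticationContext();
            if (!context.importIdentity(caller) || !context.authorize()) {
                throw new SecurityException("identity not trusted by the management domain");
            }
            return context.getAuthorizedIdentity();
        }
    }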


Re: Subsystems changing their own configuration model

Tristan Tarrant
In reply to this post by Tomaž Cerar
On 14/09/16 11:19, Tomaž Cerar wrote:

>
> On Wed, Sep 14, 2016 at 10:54 AM, Tristan Tarrant <[hidden email]> wrote:
>
>     >> - new nodes joining the cluster (and therefore with a possibly outdated
>     >> configuration) would receive the configuration of caches already running
>     >> in the cluster and apply it locally
>     >>
>     >
>     > How does this happen?
>
>     Again, via the cluster transport.
>
>
>
> How can this be? Isn't the cache configuration managed by the management
> model, and as such always "latest"? Or does the cache change its own
> configuration independently?

This would be limited to creating or removing a cache using an existing
configuration template. Example:

I have a cluster of two standalone nodes A and B with a cache X running.
A client requests the creation of a cache Y, and both A and B add it.
Node C, which joins the cluster later, will also need to start cache Y.
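
Concretely, the joining node could pull the set of runtime-created caches
from the coordinator as part of a state transfer over the cluster
transport. A JGroups-flavoured sketch (the registry itself is
illustrative, and the stack would need a state-transfer protocol):

    import java.io.DataInputStream;
    import java.io.DataOutputStream;
    import java.io.InputStream;
    import java.io.OutputStream;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import org.jgroups.JChannel;
    import org.jgroups.Receiver;

    public class RuntimeCacheRegistry implements Receiver {
        // cache name -> configuration template it was created from
        private final Map<String, String> createdCaches = new ConcurrentHashMap<>();

        // Provider side: serialize the runtime-created cache definitions.
        @Override
        public void getState(OutputStream output) throws Exception {
            DataOutputStream out = new DataOutputStream(output);
            out.writeInt(createdCaches.size());
            for (Map.Entry<String, String> e : createdCaches.entrySet()) {
                out.writeUTF(e.getKey());
                out.writeUTF(e.getValue());
            }
            out.flush();
        }

        // Joiner side: apply each definition locally, e.g. start cache Y.
        @Override
        public void setState(InputStream input) throws Exception {
            DataInputStream in = new DataInputStream(input);
            int count = in.readInt();
            for (int i = 0; i < count; i++) {
                String name = in.readUTF();
                String template = in.readUTF();
                createdCaches.put(name, template);
                // ... create and start the cache locally from the template ...
            }
        }

        public void joinAndSync(JChannel channel) throws Exception {
            channel.setReceiver(this);
            channel.connect("config-cluster");
            channel.getState(null, 10_000); // pull state from the coordinator
        }
    }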

You are probably going to tell me that this is something Domain
management should do, or some external configuration provisioning system
(e.g. Ansible). I'm exploring the various possibilities here to see what
can be done.

Tristan
--
Tristan Tarrant
Infinispan Lead
JBoss, a division of Red Hat

Re: Subsystems changing their own configuration model

Tristan Tarrant
In reply to this post by Darran Lofthouse
On 14/09/16 11:24, Darran Lofthouse wrote:

> On 14/09/16 09:54, Tristan Tarrant wrote:
>> Well, it is a protocol operation which has a management side-effect. The
>> way we have approached this in other similar situations is to either
>> require access through a loopback interface, or to require that
>> authentication and authorization be enabled on the endpoint and that the
>> subject requesting the operation hold the Admin permission. Note however
>> that the Hot Rod endpoint would be using a different security realm from
>> the management one.
> FYI, for WildFly 11, if a call remains in-VM and goes from the
> application tier to the management tier, we will have a mechanism for the
> identity to be inflowed to the security domain used for management, which
> will allow management access control to be used.
That would require the identity to be present in both "security realms"
(or whatever their equivalent is in WF11)?

Tristan

--
Tristan Tarrant
Infinispan Lead
JBoss, a division of Red Hat


Re: Subsystems changing their own configuration model

Darran Lofthouse
On 14/09/16 10:37, Tristan Tarrant wrote:

> On 14/09/16 11:24, Darran Lofthouse wrote:
>> On 14/09/16 09:54, Tristan Tarrant wrote:
>>> Well, it is a protocol operation which has a management side-effect. The
>>> way we have approached this in other similar situations is to either
>>> require access through a loopback interface, or to require that
>>> authentication and authorization be enabled on the endpoint and that the
>>> subject requesting the operation hold the Admin permission. Note however
>>> that the Hot Rod endpoint would be using a different security realm from
>>> the management one.
>> FYI, for WildFly 11, if a call remains in-VM and goes from the
>> application tier to the management tier, we will have a mechanism for the
>> identity to be inflowed to the security domain used for management, which
>> will allow management access control to be used.
> That would require the identity to be present in both "security realms"
> (or whatever their equivalent is in WF11)?

Generally yes - but there is quite a bit more to it. A security domain
can reference multiple security realms; in addition, there are ways to
structure the new configuration so that the identity does not have
direct access to the management tier.

Also, the identity will look very different depending on which tier it is
in, as each tier will have its own security domain with its own
configuration for role and permission mapping.
