WildFly (domain) management in OpenShift V3

WildFly (domain) management in OpenShift V3

Thomas Diesler-2
Folks,

I’ve recently been looking at WildFly container deployments on OpenShift V3. The following setup is documented here: <https://github.com/wildfly-extras/wildfly-camel-book/blob/2.1/cloud/fabric8.md>


The example architecture consists of a set of three highly available (HA) servers running REST endpoints.
For server replication and failover we use Kubernetes. Each server runs in a dedicated Pod that we access via Services.
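
(For illustration, a minimal sketch of the kind of REST endpoint each replica might run, written as a Camel route in the Java DSL; the camel-servlet component, the /greeting path and the message are assumptions, not taken from the actual example.)

import org.apache.camel.builder.RouteBuilder;

// Sketch only: component choice, path and message are assumptions for illustration.
public class GreetingRoute extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        // Exposed through the container's HTTP port (8080 in the setup above)
        from("servlet:greeting")
            .transform(simple("Hello from ${sysenv.HOSTNAME}\n"));
    }
}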

This approach comes with a number of benefits, which are sufficiently explained in various OpenShift, Kubernetes, and Docker materials, but also with a number of challenges. Let's look at those in more detail …

In the example above Kubernetes replicates a number of standalone containers and isolates them each in a Pod, with limited access from the outside world.

* The management interfaces are not accessible 
* The management consoles are not visible

With WildFly-Camel we have a Hawt.io console that allows us to manage Camel Routes configured or deployed to the WildFly runtime. 
The WildFly console manages aspects of the appserver.

In a more general sense, I was wondering how the WildFly domain model maps to the Kubernetes runtime environment, how these server instances are managed, and how information about them is relayed back to the sysadmin.

a) Should these individual wildfly instances somehow be connected to each other (i.e. notion of domain)?
b) How would an HA singleton service work?
c) What level of management should be exposed to the outside?
d) Should it be possible to modify runtime behaviour of these servers (i.e. write access to config)?
e) Should deployment be supported at all?
f) How can a server be detected that has gone bad?
g) Should logs be aggregated?
h) Should there be a common management view (i.e. console) for these servers?
i) etc …

Are these concerns already being addressed for WildFly? 

Is there perhaps even an already existing design that I could look at?

Can such an effort be connected to the work that is going on in Fabric8? 

cheers
—thomas

PS: it would be an area that we @ wildfly-camel would be interested to work on
 

_______________________________________________
wildfly-dev mailing list
[hidden email]
https://lists.jboss.org/mailman/listinfo/wildfly-dev

Re: WildFly (domain) management in OpenShift V3

Brian Stansberry
On 12/5/14, 7:36 AM, Thomas Diesler wrote:

> Folks,
>
> I’ve recently been looking at WildFly container deployments on OpenShift
> V3. The following setup is documented here
> <https://github.com/wildfly-extras/wildfly-camel-book/blob/2.1/cloud/fabric8.md>
>
>
>     The example architecture consists of a set of three highly available
>     (HA) servers running REST endpoints.
>     For server replication and failover we use Kubernetes. Each server
>     runs in a dedicated Pod that we access via Services.
>
> This approach comes with a number of benefits, which are sufficiently
> explained in various OpenShift
> <https://blog.openshift.com/openshift-v3-platform-combines-docker-kubernetes-atomic-and-more/>,
> Kubernetes
> <https://github.com/GoogleCloudPlatform/kubernetes/blob/master/README.md> and
> Docker <https://docs.docker.com/> materials, but also with a number of
> challenges. Let's look at those in more detail …
>
> In the example above Kubernetes replicates a number of standalone
> containers and isolates them in a Pod each with limited access from the
> outside world.
>
> * The management interfaces are not accessible
> * The management consoles are not visible
>
> With WildFly-Camel we have a Hawt.io
> <http://wildflyext.gitbooks.io/wildfly-camel/content/features/hawtio.html> console
> that allows us to manage Camel Routes configured or deployed to the
> WildFly runtime.
> The WildFly console manages aspects of the appserver.
>
> In a more general sense, I was wondering how the WildFly domain model
> maps to the Kubernetes runtime environment and how these server
> instances are managed and information about them relayed back to the
> sysadmin
>

Your questions below mostly relate (correctly) to what *should* be done
but I'll preface by discussing what *could* be done. Please forgive noob
mistakes as I'm an admitted Kubernetes noob.

AIUI a Kubernetes service exposes a single endpoint to outside callers,
but the containers in the pods can open an arbitrary number of client
connections to other services.

This should work fine with WildFly domain management, as there can be a
Service for the Domain Controller, which is the management interaction
point for the sysadmin. And then the WildFly instance in the container
for any other Service can connect and register with that Domain
Controller service. The address/port those other containers use can be
the same one that sysadmins use.
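
To make that concrete, here is a hedged sketch (not taken from any existing example) of a management client talking to such a Domain Controller Service and listing the hosts registered with it. The Service name "wildfly-dc" and port 9990 are assumptions, and the effective protocol depends on the WildFly version and how the Service is exposed.

import org.jboss.as.controller.client.ModelControllerClient;
import org.jboss.dmr.ModelNode;

// Sketch only: "wildfly-dc" is a hypothetical Kubernetes Service name and 9990
// an assumed management port; adjust to however the DC Service is actually exposed.
public class ListDomainHosts {
    public static void main(String[] args) throws Exception {
        ModelControllerClient client = ModelControllerClient.Factory.create("wildfly-dc", 9990);
        try {
            ModelNode op = new ModelNode();
            op.get("address").setEmptyList();              // execute against the domain root
            op.get("operation").set("read-children-names");
            op.get("child-type").set("host");
            System.out.println(client.execute(op));        // host controllers registered with the DC
        } finally {
            client.close();
        }
    }
}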

> a) Should these individual wildfly instances somehow be connected to
> each other (i.e. notion of domain)?

Depends on the use case, but I certainly expect some users will want
centralized management, even if it's just for monitoring.

> b) How would an HA singleton service work?

WildFly *domain management* itself does not have an HA singleton notion, but

i) Kubernetes replication controllers themselves provide a form of this,
but I assume with a period of downtime while a new pod is spun up.

ii) WildFly clustering has an HA singleton service concept that can be
used. There are different mechanisms JGroups supports for group
communication, but one involves each peer in the group connecting to a
central coordination process. So presumably that coordination process
could be deployed as a Kubernetes Service.
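
As a rough illustration of that group-membership idea (this is plain JGroups, not the WildFly SingletonService API, and the cluster name is made up): every member watches the view, and by convention the first member of the view, the coordinator, runs the singleton work.

import org.jgroups.JChannel;
import org.jgroups.ReceiverAdapter;
import org.jgroups.View;

// Sketch only: plain JGroups, not the WildFly clustering API. The default stack
// is UDP multicast; a TCP-based stack would be a better fit for Kubernetes.
public class SingletonElection {
    public static void main(String[] args) throws Exception {
        final JChannel channel = new JChannel();
        channel.setReceiver(new ReceiverAdapter() {
            @Override
            public void viewAccepted(View view) {
                // Convention: the first member of the view (the coordinator) runs the singleton
                boolean singletonHere = view.getMembers().get(0).equals(channel.getAddress());
                System.out.println("New view " + view + ", running singleton here: " + singletonHere);
            }
        });
        channel.connect("ha-singleton-demo");   // made-up cluster name
    }
}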

> c) What level of management should be exposed to the outside?

As much as possible this should be a user choice. Architecturally, I
believe we can expose everything. I'm not really keen on trying to disable
things in Kubernetes-specific ways, but I'm quite open to features,
usable in any deployment environment, for disabling things.

> d) Should it be possible to modify runtime behaviour of these servers
> (i.e. write access to config)?

See c). We don't have a true read-only mode, although I think it would be
fairly straightforward to add such a thing if it were a requirement.

> e) Should deployment be supported at all?

See c). Making it configurable to remove the deployment capability is also
doable, although it's likely more work than a simple read-only mode.

> f) How can a server be detected that has gone bad?

I'll need to get a better understanding of Kubernetes to say anything
useful about this.

> g) Should logs be aggregated?

This sounds like something that belongs at a higher layer, or as a
general purpose WildFly feature unrelated to Kubernetes.

> h) Should there be a common management view (i.e. console) for these
> servers?

I don't see why not. I think some users will want that, others won't,
and others will want a console that spans things beyond WildFly servers.

> i) etc …
>
> Are these concerns already being addressed for WildFly?
>

Somewhat. As you can see from the above, a fair bit of stuff could just
work. I know Heiko Braun has been thinking a bit about Kubernetes use
cases too, or at least wanting to do so. ;)

> Is there perhaps even an already existing design that I could look at?
>

Kubernetes specific stuff? No.

> Can such an effort be connected to the work that is going on in Fabric8?
>

The primary Fabric8-related thing we (aka Alexey Loubyansky) are currently
doing is working to support non-XML-based persistence of our config files,
plus a mechanism for the server to detect changes to the filesystem and
trigger updates to the runtime. The goal is to integrate with the git-based
mechanisms Fabric8 uses for configuration.

https://developer.jboss.org/docs/DOC-52773
https://issues.jboss.org/browse/WFCORE-294
https://issues.jboss.org/browse/WFCORE-433
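
That is not spelled out in code above, so purely as a hedged sketch of the general mechanism (not the actual WFCORE-294/433 design), a filesystem watcher over an assumed configuration directory could look like this:

import java.nio.file.FileSystems;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardWatchEventKinds;
import java.nio.file.WatchEvent;
import java.nio.file.WatchKey;
import java.nio.file.WatchService;

// Sketch only: just the general idea of detecting filesystem changes and
// triggering a runtime update. The directory location is an assumption.
public class ConfigDirWatcher {
    public static void main(String[] args) throws Exception {
        Path configDir = Paths.get("standalone/configuration");
        WatchService watcher = FileSystems.getDefault().newWatchService();
        configDir.register(watcher,
                StandardWatchEventKinds.ENTRY_CREATE,
                StandardWatchEventKinds.ENTRY_MODIFY);
        while (true) {
            WatchKey key = watcher.take();                 // blocks until something changes
            for (WatchEvent<?> event : key.pollEvents()) {
                System.out.println("Config change detected: " + event.context());
                // here the server would parse the change and apply it to the running model
            }
            key.reset();
        }
    }
}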

> cheers
> —thomas
>
> PS: it would be area that we @ wildfly-camel were interested to work on

Great! :)

--
Brian Stansberry
Senior Principal Software Engineer
JBoss by Red Hat

Re: WildFly (domain) management in OpenShift V3

Thomas Diesler-2
Thanks Brian, I’d like to do a little more research with WildFly domain mode in OpenShift before responding. Won’t be long ...


WildFly domain on OpenShift Origin

Thomas Diesler-2
In reply to this post by Thomas Diesler-2
Folks, 

following up on this topic, I worked a little more on WildFly-Camel in Kubernetes/OpenShift. 

These doc pages are targeted for the upcoming 2.1.0 release (01-Feb-2015):

* WildFly-Camel on Docker <https://github.com/wildfly-extras/wildfly-camel-book/blob/2.1/cloud/docker.md>
* WildFly-Camel on OpenShift <https://github.com/wildfly-extras/wildfly-camel-book/blob/2.1/cloud/openshift.md>

The setup looks like this


We can now manage these individual wildfly nodes. The domain controller (DC) is replicated once, the host definition is replicated three times.
Theoretically, this means that there is no single point of failure with the domain controller any more - kube would respawn the DC on failure.

Here are some ideas for improvement …

In a kube env we should be able to swap out containers based on some criteria. It should be possible to define these criteria, emit events based on them, and create/remove/replace containers automatically.
Additionally, a human should be able to make qualified decisions through a console and create/remove/replace containers easily.
Much of the needed information is in JMX. Heiko told me that there is a project that can push events to InfluxDB - something to look at.

If displaying the information contained in JMX in a console (e.g. in hawtio) has value, that information must be aggregated and visible for each node.
Currently, we have a round-robin service on port 8080 which would show a different hawtio instance on every request - this is nonsense.

I can see a number of high-level items:

#1 a thing that aggregates JMX content - possibly multiple MBeanServers in the DC VM that delegate to the respective MBeanServers on other hosts, so that a management client can pick up the info from one service (see the sketch after this list)
#2 look at the existing influxdb thing and research how to automate the replacement of containers
#3 from the usability perspective, there may need to be an openshift profile in the console(s) because some operations may not make sense in that env
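
For #1, a hedged sketch of the aggregation idea (the node URLs are made up, and the actual JMX transport, e.g. remoting-jmx over the management port, depends on how each container exposes JMX):

import java.util.Arrays;
import java.util.List;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

// Sketch only: one process polls the same MBean on every node so that a client
// (or console) only needs to talk to this single aggregation service.
public class JmxAggregator {
    public static void main(String[] args) throws Exception {
        List<String> nodeUrls = Arrays.asList(             // hypothetical per-node JMX endpoints
                "service:jmx:rmi:///jndi/rmi://wildfly-host-1:1090/jmxrmi",
                "service:jmx:rmi:///jndi/rmi://wildfly-host-2:1090/jmxrmi");
        ObjectName memory = new ObjectName("java.lang:type=Memory");
        for (String url : nodeUrls) {
            JMXConnector connector = JMXConnectorFactory.connect(new JMXServiceURL(url));
            try {
                MBeanServerConnection mbsc = connector.getMBeanServerConnection();
                System.out.println(url + " -> " + mbsc.getAttribute(memory, "HeapMemoryUsage"));
            } finally {
                connector.close();
            }
        }
    }
}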

cheers
—thomas

PS: looking forward to an exciting ride in 2015

 

Re: WildFly domain on OpenShift Origin

Thomas Diesler-2
/reducing the cc noise

Yes, I was hoping to hear that this has already been thought about. 

Is there a design document for this JMX aggregation? 
What are the possible target environments and functional requirements? 
Would this be reusable in a plain WildFly domain?

cheers
—thomas

On 17 Dec 2014, at 10:35, Rob Davies <[hidden email]> wrote:

Hi Thomas,

it would be great to see this as an example quickstart in fabric8 - then you could pick up the jmx aggregation etc for free :)


Re: WildFly domain on OpenShift Origin

Brian Stansberry
In reply to this post by Thomas Diesler-2
On 12/17/14, 3:28 AM, Thomas Diesler wrote:

> Folks,
>
> following up on this topic, I worked a little more on WildFly-Camel in
> Kubernetes/OpenShift.
>
> These doc pages are targeted for the upcoming 2.1.0 release (01-Feb-2015)
>
>   * WildFly-Camel on Docker
>     <https://github.com/wildfly-extras/wildfly-camel-book/blob/2.1/cloud/docker.md>
>   * WildFly-Camel on OpenShift
>     <https://github.com/wildfly-extras/wildfly-camel-book/blob/2.1/cloud/openshift.md>
>

Great. :)

>
> The setup looks like this
>
>
> We can now manage these individual wildfly nodes. The domain controller
> (DC) is replicated once, the host definition is replicated three times.
> Theoretically, this means that there is no single point of failure with
> the domain controller any more - kube would respawn the DC on failure
>

I'm heading on PTO tomorrow so likely won't be able to follow up on this
question for a while, but one concern I had with the Kubernetes respawn
approach was retaining any changes that had been made to the domain
configuration. Unless the domain.xml comes from / is written to some
shared storage available to the respawned DC, any changes made will be lost.

Of course, if the DC is only being used for reads, this isn't an issue.



--
Brian Stansberry
Senior Principal Software Engineer
JBoss by Red Hat

Re: WildFly domain on OpenShift Origin

Thomas Diesler-2

> On 17 Dec 2014, at 15:42, Brian Stansberry <[hidden email]> wrote:
>
> I'm heading on PTO tomorrow so likely won't be able to follow up on this
> question for a while, but one concern I had with the Kubernetes respawn
> approach was retaining any changes that had been made to the domain
> configuration. Unless the domain.xml comes from / is written to some
> shared storage available to the respawned DC, any changes made will be lost.
>
> Of course, if the DC is only being used for reads, this isn't an issue.

Yes, the management interface would need to detect whether a volume is used and perhaps issue a warning accordingly.


Re: WildFly domain on OpenShift Origin

Thomas Diesler-2
In reply to this post by Brian Stansberry
Let's start with requirements and a design that everybody who has a stake in this can agree on - I’ll get a doc started.

On 18 Dec 2014, at 09:18, James Strachan <[hidden email]> wrote:

If the EAP console is available as a Kubernetes Service we can easily add it to the hawtio nav bar like we do with Kibana, Grafana et al.

On 17 Dec 2014, at 16:17, Thomas Diesler <[hidden email]> wrote:

Thanks James,

I’ll look at the fabric8 hawtio console next and see if I can get it to work alongside the wildfly console. Then I think I should meet with Heiko/Harald (for a long walk) and talk about this some more.

—thomas

<PastedGraphic-1.tiff>

On 17 Dec 2014, at 15:59, James Strachan <[hidden email]> wrote:

A persistent volume could be used for the pod running the DC; if the pod is restarted or if it fails over to another host the persistent volume will be preserved (using one of the shared volume mechanisms in kubernetes/openshift like Ceph/Gluster/Cinder/S3/EBS etc)


Re: WildFly domain on OpenShift Origin

Heiko Braun
Did you already provide a link to that document?

/Heiko




On 18 Dec 2014, at 09:26, Thomas Diesler <[hidden email]> wrote:

Lets start with requirements and a design that everybody who has a stake in this can be agreed on - I’ll get a doc started.


Re: WildFly domain on OpenShift Origin

Thomas Diesler-2
what I have is here


It runs a wildfly domain on OpenShift and accesses the management interface from the EC2 instance. Due to a public port exposure bug in Kubernetes we can’t see the web console yet, but that should be fixed soon (if it isn’t already).

cheers
—thomas

On 7 Jan 2015, at 15:13, Heiko Braun <[hidden email]> wrote:

Did you already provide a link to that document?

/Heiko



