
Ceph-OSD errors #49

Open
glzavert opened this issue Jun 19, 2018 · 9 comments

@glzavert

This install was working, but now it has stopped. Every time I deploy, the ceph-osd units error.

Results of "ceph status" on the OSDs:

2018-06-19 14:27:57.194686 7fd4c4883700 -1 Errors while parsing config file!
2018-06-19 14:27:57.194733 7fd4c4883700 -1 parse_file: cannot open /etc/ceph/ceph.conf: (2) No such file or directory
2018-06-19 14:27:57.194735 7fd4c4883700 -1 parse_file: cannot open ~/.ceph/ceph.conf: (2) No such file or directory
2018-06-19 14:27:57.194736 7fd4c4883700 -1 parse_file: cannot open ceph.conf: (2) No such file or directory
Error initializing cluster client: ObjectNotFound('error calling conf_read_file',)
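The three parse_file lines above are the ceph CLI walking its config search order before giving up. A quick way to check which candidate (if any) exists on a failing unit is a loop like this (a sketch; find_ceph_conf is a hypothetical helper, not part of ceph):

```shell
# Mimic the search order from the error output: system-wide,
# per-user, then the current directory.
find_ceph_conf() {
  for f in "$@"; do
    if [ -f "$f" ]; then
      echo "$f"
      return 0
    fi
  done
  return 1
}

find_ceph_conf /etc/ceph/ceph.conf "$HOME/.ceph/ceph.conf" ./ceph.conf \
  || echo "no ceph.conf found" >&2
```

If none of the paths exist, the charm's install hook never got far enough to write /etc/ceph/ceph.conf, so the error points back at the failed hook rather than at ceph itself.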

@glzavert
Author

I have tried to install this stack six times with minor modifications here and there; ceph-osd does not install.
Output of juju status:

Model Controller Cloud/Region Version SLA
default lxd localhost/localhost 2.3.8 unsupported

App Version Status Scale Charm Store Rev OS Notes
ceilometer 10.0.0 waiting 1 ceilometer jujucharms 253 ubuntu
ceilometer-agent 10.0.0 active 2 ceilometer-agent jujucharms 244 ubuntu
ceph-mon 12.2.4 active 3 ceph-mon jujucharms 25 ubuntu
ceph-osd error 6 ceph-osd jujucharms 262 ubuntu
ceph-radosgw 12.2.4 active 1 ceph-radosgw jujucharms 258 ubuntu
cinder 12.0.1 active 1 cinder jujucharms 272 ubuntu
cinder-ceph 12.0.1 active 1 cinder-ceph jujucharms 233 ubuntu
designate 6.0.1 active 1 designate jujucharms 19 ubuntu
designate-bind 9.11.3+dfsg active 1 designate-bind jujucharms 13 ubuntu
glance 16.0.1 active 1 glance jujucharms 265 ubuntu
gnocchi 4.2.4 error 1 gnocchi jujucharms 8 ubuntu
heat 10.0.1 active 1 heat jujucharms 252 ubuntu
keystone 13.0.0 active 1 keystone jujucharms 281 ubuntu
lxd 3.0.0 active 1 lxd jujucharms 18 ubuntu
memcached unknown 1 memcached jujucharms 21 ubuntu
mysql 5.7.20-29.24 active 1 percona-cluster jujucharms 266 ubuntu
neutron-api 12.0.2 active 1 neutron-api jujucharms 260 ubuntu
neutron-gateway 12.0.2 active 1 neutron-gateway jujucharms 252 ubuntu
neutron-openvswitch 12.0.2 active 2 neutron-openvswitch jujucharms 250 ubuntu
nova-cloud-controller 17.0.4 active 1 nova-cloud-controller jujucharms 310 ubuntu
nova-compute-kvm 17.0.4 error 1 nova-compute jujucharms 284 ubuntu
nova-compute-lxd 17.0.4 active 1 nova-compute jujucharms 284 ubuntu
openstack-dashboard 13.0.0 active 1 openstack-dashboard jujucharms 259 ubuntu
rabbitmq-server 3.6.10 active 1 rabbitmq-server jujucharms 74 ubuntu

Unit Workload Agent Machine Public address Ports Message
ceilometer/0* waiting idle 0 10.89.13.92 Incomplete relations: database
ceph-mon/0* active idle 1 10.89.13.114 Unit is ready and clustered
ceph-mon/1 active idle 2 10.89.13.235 Unit is ready and clustered
ceph-mon/2 active idle 3 10.89.13.26 Unit is ready and clustered
ceph-osd/0 error idle 4 10.89.13.66 hook failed: "install"
ceph-osd/1 error idle 5 10.89.13.19 hook failed: "install"
ceph-osd/2* error idle 6 10.89.13.103 hook failed: "install"
ceph-osd/3 error idle 7 10.89.13.93 hook failed: "install"
ceph-osd/4 error idle 8 10.89.13.91 hook failed: "install"
ceph-osd/5 error idle 9 10.89.13.131 hook failed: "install"
ceph-radosgw/0* active idle 10 10.89.13.143 80/tcp Unit is ready
cinder/0* active idle 11 10.89.13.221 8776/tcp Unit is ready
cinder-ceph/0* active idle 10.89.13.221 Unit is ready
designate-bind/0* active idle 13 10.89.13.246 Unit is ready
designate/0* active idle 12 10.89.13.17 9001/tcp Unit is ready
glance/0* active idle 14 10.89.13.75 9292/tcp Unit is ready
gnocchi/0* error idle 15 10.89.13.208 8041/tcp hook failed: "identity-service-relation-changed"
heat/0* active idle 16 10.89.13.112 8000/tcp,8004/tcp Unit is ready
keystone/0* active idle 17 10.89.13.162 5000/tcp Unit is ready
memcached/0* unknown idle 18 10.89.13.101 11211/tcp
mysql/0* active idle 19 10.89.13.87 3306/tcp Unit is ready
neutron-api/0* active idle 20 10.89.13.120 9696/tcp Unit is ready
neutron-gateway/0* active idle 21 10.89.13.210 Unit is ready
nova-cloud-controller/0* active idle 22 10.89.13.182 8774/tcp,8778/tcp Unit is ready
nova-compute-kvm/0* error idle 23 10.89.13.21 hook failed: "ceph-relation-changed"
ceilometer-agent/1 active idle 10.89.13.21 Unit is ready
neutron-openvswitch/1 active idle 10.89.13.21 Unit is ready
nova-compute-lxd/0* active idle 24 10.89.13.144 Unit is ready
ceilometer-agent/0* active idle 10.89.13.144 Unit is ready
lxd/0* active idle 10.89.13.144 Unit is ready
neutron-openvswitch/0* active idle 10.89.13.144 Unit is ready
openstack-dashboard/0* active idle 25 10.89.13.148 80/tcp,443/tcp Unit is ready
rabbitmq-server/0* active idle 26 10.89.13.251 5672/tcp Unit is ready

Machine State DNS Inst id Series AZ Message
0 started 10.89.13.92 juju-b5b31b-0 bionic Running
1 started 10.89.13.114 juju-b5b31b-1 bionic Running
2 started 10.89.13.235 juju-b5b31b-2 bionic Running
3 started 10.89.13.26 juju-b5b31b-3 bionic Running
4 started 10.89.13.66 juju-b5b31b-4 bionic Running
5 started 10.89.13.19 juju-b5b31b-5 bionic Running
6 started 10.89.13.103 juju-b5b31b-6 bionic Running
7 started 10.89.13.93 juju-b5b31b-7 bionic Running
8 started 10.89.13.91 juju-b5b31b-8 bionic Running
9 started 10.89.13.131 juju-b5b31b-9 bionic Running
10 started 10.89.13.143 juju-b5b31b-10 bionic Running
11 started 10.89.13.221 juju-b5b31b-11 bionic Running
12 started 10.89.13.17 juju-b5b31b-12 bionic Running
13 started 10.89.13.246 juju-b5b31b-13 bionic Running
14 started 10.89.13.75 juju-b5b31b-14 bionic Running
15 started 10.89.13.208 juju-b5b31b-15 bionic Running
16 started 10.89.13.112 juju-b5b31b-16 bionic Running
17 started 10.89.13.162 juju-b5b31b-17 bionic Running
18 started 10.89.13.101 juju-b5b31b-18 bionic Running
19 started 10.89.13.87 juju-b5b31b-19 bionic Running
20 started 10.89.13.120 juju-b5b31b-20 bionic Running
21 started 10.89.13.210 juju-b5b31b-21 bionic Running
22 started 10.89.13.182 juju-b5b31b-22 bionic Running
23 started 10.89.13.21 juju-b5b31b-23 bionic Running
24 started 10.89.13.144 juju-b5b31b-24 bionic Running
25 started 10.89.13.148 juju-b5b31b-25 bionic Running
26 started 10.89.13.251 juju-b5b31b-26 bionic Running

@glzavert
Author

Logs from the first OSD:

machine-4.log
unit-ceph-osd-0.log

@sfeole
Contributor

sfeole commented Jun 21, 2018

Hey there, looks like you're hitting this bug: https://bugs.launchpad.net/charm-ceph-osd/+bug/1776713

which actually looks more like a bug in LXD itself.

GitHub bug: https://github.com/lxc/lxd/issues/4673

I'm guessing that if you're on Bionic you have LXD 3.0; you may want to try LXD 3.1 from the snap store:

$ sudo snap install --stable lxd

This is a temporary workaround for now.
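To check whether a host is on the affected LXD level before reinstalling, a version comparison along these lines works (a sketch assuming GNU sort -V and that lxc --version prints a bare version string; version_ge is a hypothetical helper):

```shell
# True when $1 is the same as or newer than $2, by version sort.
version_ge() {
  [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$2" ]
}

have="$(lxc --version 2>/dev/null || echo 0)"
if ! version_ge "$have" "3.1"; then
  echo "LXD $have predates 3.1; consider: sudo snap install --stable lxd"
fi
```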

@glzavert
Author

glzavert commented Jun 21, 2018

So I am not sure how this upgrade to LXD works in Ubuntu. From what I have read, 3.1 is a "feature" update, and feature updates will not be added to the 18.04 LTS repos? How and when will LXD updates land back in the main 18.04 repos? Rather than going down the path of installing and updating LXD with snap, I guess I could roll back ceph-osd to v261 instead? Scratch that: I just looked in the charm store, and only the current version is available.

@sfeole
Contributor

sfeole commented Jun 21, 2018

I ran into this problem yesterday for the first time and filed the bugs to get triage started, but after spending more time on it and speaking with the LXD folks, they tend to believe it could be some sort of race condition (specifically with charm rev 262). So I'm still running through some tests and looking at it here as well.

From what I can gather, udevadm chokes when trying to reload a single rule that's imported via the charm (related to juju), but I'm not able to reproduce it when running the reload manually. So it may be more charm-related after all.

As you have discovered, running the older version of the charm appears to resolve the issue, just as upgrading to LXD 3.1 did for me. If it's a race, that may explain why it worked with the upgraded LXD; then again, I have NOT been able to get this working on a straight-up fresh install of Bionic on any host.

You can tell your bundle to install v261 simply by changing charm: cs:ceph-osd to charm: cs:ceph-osd-261. Granted, doing so may introduce new problems: even though you have validated that it installs, Queens-related features introduced in 262 may not work. You can probably sort through the commit log to figure that out, if it's a problem.
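In bundle terms, that pin looks roughly like this (a sketch in juju 2.x bundle syntax; the surrounding keys and unit count are assumptions based on this thread, only the cs:ceph-osd-261 value comes from the comment above):

```yaml
applications:
  ceph-osd:
    charm: cs:ceph-osd-261   # pinned revision; use cs:ceph-osd to track latest
    num_units: 6
```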

I'll let you know and update this bug if I find anything useful.

ta

@glzavert
Author

So, I am not getting anywhere. I went back and removed juju and lxd, destroyed the controller, etc. Once I had an updated base 18.04 image without lxd and juju, I installed both via snap (I couldn't install juju from the deb package, as it wanted to reinstall LXD 3.0.1 on top of 3.1). I completely reconfigured lxd and bootstrapped juju. All appeared good: lxd at 3.1 and juju at 2.3.8-bionic-amd64. But now when I run the juju deploy, everything just hangs and stays that way. I have done this 4 times with the same results.

udeadmin@udedemos:~/openstack-on-lxd$ juju status
Model Controller Cloud/Region Version SLA
default lxd localhost/localhost 2.3.8 unsupported

App Version Status Scale Charm Store Rev OS Notes
ceilometer waiting 0/1 ceilometer jujucharms 253 ubuntu
ceilometer-agent waiting 0 ceilometer-agent jujucharms 244 ubuntu
ceph-mon waiting 0/3 ceph-mon jujucharms 25 ubuntu
ceph-osd waiting 0/6 ceph-osd jujucharms 262 ubuntu
ceph-radosgw waiting 0/1 ceph-radosgw jujucharms 258 ubuntu
cinder waiting 0/1 cinder jujucharms 272 ubuntu
cinder-ceph waiting 0 cinder-ceph jujucharms 233 ubuntu
designate waiting 0/1 designate jujucharms 19 ubuntu
designate-bind waiting 0/1 designate-bind jujucharms 13 ubuntu
glance waiting 0/1 glance jujucharms 265 ubuntu
gnocchi waiting 0/1 gnocchi jujucharms 8 ubuntu
heat waiting 0/1 heat jujucharms 252 ubuntu
keystone waiting 0/1 keystone jujucharms 281 ubuntu
lxd waiting 0 lxd jujucharms 18 ubuntu
memcached waiting 0/1 memcached jujucharms 21 ubuntu
mysql waiting 0/1 percona-cluster jujucharms 266 ubuntu
neutron-api waiting 0/1 neutron-api jujucharms 260 ubuntu
neutron-gateway waiting 0/1 neutron-gateway jujucharms 252 ubuntu
neutron-openvswitch waiting 0 neutron-openvswitch jujucharms 250 ubuntu
nova-cloud-controller waiting 0/1 nova-cloud-controller jujucharms 310 ubuntu
nova-compute-kvm waiting 0/1 nova-compute jujucharms 284 ubuntu
nova-compute-lxd waiting 0/1 nova-compute jujucharms 284 ubuntu
openstack-dashboard waiting 0/1 openstack-dashboard jujucharms 259 ubuntu
rabbitmq-server waiting 0/1 rabbitmq-server jujucharms 74 ubuntu

Unit Workload Agent Machine Public address Ports Message
ceilometer/0 waiting allocating 0 10.41.89.179 waiting for machine
ceph-mon/0 waiting allocating 1 10.41.89.56 waiting for machine
ceph-mon/1 waiting allocating 2 10.41.89.195 waiting for machine
ceph-mon/2 waiting allocating 3 waiting for machine
ceph-osd/0 waiting allocating 4 waiting for machine
ceph-osd/1 waiting allocating 5 waiting for machine
ceph-osd/2 waiting allocating 6 waiting for machine
ceph-osd/3 waiting allocating 7 waiting for machine
ceph-osd/4 waiting allocating 8 waiting for machine
ceph-osd/5 waiting allocating 9 waiting for machine
ceph-radosgw/0 waiting allocating 10 waiting for machine
cinder/0 waiting allocating 11 waiting for machine
designate-bind/0 waiting allocating 13 waiting for machine
designate/0 waiting allocating 12 waiting for machine
glance/0 waiting allocating 14 waiting for machine
gnocchi/0 waiting allocating 15 waiting for machine
heat/0 waiting allocating 16 waiting for machine
keystone/0 waiting allocating 17 waiting for machine
memcached/0 waiting allocating 18 waiting for machine
mysql/0 waiting allocating 19 waiting for machine
neutron-api/0 waiting allocating 20 waiting for machine
neutron-gateway/0 waiting allocating 21 waiting for machine
nova-cloud-controller/0 waiting allocating 22 waiting for machine
nova-compute-kvm/0 waiting allocating 23 waiting for machine
nova-compute-lxd/0 waiting allocating 24 waiting for machine
openstack-dashboard/0 waiting allocating 25 waiting for machine
rabbitmq-server/0 waiting allocating 26 waiting for machine

Machine State DNS Inst id Series AZ Message
0 pending 10.41.89.179 juju-51cc48-0 bionic Running
1 pending 10.41.89.56 juju-51cc48-1 bionic Running
2 pending 10.41.89.195 juju-51cc48-2 bionic Running
3 pending pending bionic preparing image
4 pending pending bionic preparing image
5 pending pending bionic preparing image
6 pending pending bionic preparing image
7 pending pending bionic preparing image
8 pending juju-51cc48-8 bionic container started
9 pending pending bionic preparing image
10 pending juju-51cc48-10 bionic container started
11 pending juju-51cc48-11 bionic container started
12 pending pending bionic preparing image
13 pending juju-51cc48-13 bionic container started
14 pending pending bionic preparing image
15 pending juju-51cc48-15 bionic container started
16 pending pending bionic preparing image
17 pending pending bionic preparing image
18 pending pending bionic preparing image
19 pending pending bionic preparing image
20 pending juju-51cc48-20 bionic container started
21 pending pending bionic preparing image
22 pending pending bionic preparing image
23 pending pending bionic preparing image
24 pending pending bionic preparing image
25 pending pending bionic preparing image
26 pending pending bionic preparing image

@sfeole
Contributor

sfeole commented Jun 22, 2018

@glzavert did you run through the sysctl settings described in: https://docs.openstack.org/charm-guide/latest/openstack-on-lxd.html
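For reference, the sysctl step in that guide raises host kernel limits roughly like this (a sketch; these values follow the common LXD production-setup recommendations and may differ from the guide's current list, so check the link above for the authoritative values):

```
# Append to /etc/sysctl.conf (or a file in /etc/sysctl.d/), then: sudo sysctl -p
fs.inotify.max_queued_events = 1048576
fs.inotify.max_user_instances = 1048576
fs.inotify.max_user_watches = 1048576
vm.max_map_count = 262144
```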

I have been unable to reproduce this last night and today: ceph-osd now successfully installs, configures, and sits idle in the ready state on vanilla Bionic installs, using the stock Bionic versions of lxd (3.0) and juju 2.3.8-bionic-arm64.

A bit frustrated, as now I can't find steps to reproduce this reliably. Will update if I get any more useful info.

$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 18.04 LTS
Release: 18.04
Codename: bionic

ubuntu@hotdog:~/openstack-on-lxd$ uname -a
Linux hotdog 4.15.0-23-generic #25-Ubuntu SMP Wed May 23 17:59:52 UTC 2018 aarch64 aarch64 aarch64 GNU/Linux

$ lxc --version
3.0.0

Model Controller Cloud/Region Version SLA
default localhost-localhost localhost/localhost 2.3.8 unsupported

App Version Status Scale Charm Store Rev OS Notes
ceilometer 10.0.0 waiting 1 ceilometer jujucharms 253 ubuntu
ceilometer-agent waiting 0 ceilometer-agent jujucharms 244 ubuntu
ceph-mon 12.2.4 active 3 ceph-mon jujucharms 25 ubuntu
ceph-osd 12.2.4 active 3 ceph-osd jujucharms 262 ubuntu
ceph-radosgw 12.2.4 active 1 ceph-radosgw jujucharms 258 ubuntu

Unit Workload Agent Machine Public address Ports Message
ceilometer/0* waiting idle 0 10.111.158.119 Incomplete relations: messaging
ceph-mon/0* active idle 1 10.111.158.183 Unit is ready and clustered
ceph-mon/1 active idle 2 10.111.158.173 Unit is ready and clustered
ceph-mon/2 active idle 3 10.111.158.188 Unit is ready and clustered
ceph-osd/0* active idle 4 10.111.158.134 Unit is ready (1 OSD)
ceph-osd/1 active idle 5 10.111.158.33 Unit is ready (1 OSD)
ceph-osd/2 active idle 6 10.111.158.213 Unit is ready (1 OSD)
ceph-radosgw/0* active idle 7 10.111.158.43 80/tcp Unit is ready

@glzavert
Author

@sfeole Has anyone figured this out yet? I just had to rip and replace my environment because Horizon began to error every time I tried to edit metadata, with "unable to retrieve the namespaces" errors. That left me with only the CLI for doing anything.

I did a complete redeploy: if I use charm: cs:ceph-osd-262 it errors every time, but with charm: cs:ceph-osd-261 I can get it to install.

@glzavert
Author

I have to say I am more than a little frustrated here. I continue to be unable to get a consistently functioning OpenStack deployment, even when following the basic openstack-on-lxd instructions. I will explain what I am doing and what I get, and upload the applicable files, to the best of my ability. I HOPE SOMEONE WILL ACTUALLY LOOK AT THIS AND HELP!

So, I have an Ubuntu 18.04 VM that is up to date, with 32G of memory and 8 cores. I have set up and fully tested LXD with a storage pool that uses a 3.4TB partitioned, unmounted, and unformatted block device (i.e. /dev/sdb1) for the ZFS storage pool; the host OS is on a separate 500G device. I have used both LXD v3.0.1 and v3.2 with exactly the same results.

If I use Juju to deploy bundle-bionic-queens.yaml, or any variant of the yaml without specifying charm versions, the entire deployment hangs somewhere in the middle and I see kernel messages about hung requests with 120-second timeouts. Again, this happens every time, regardless of whether I use a new VM with a clean 18.04 install or an existing VM that has worked before.

Since this is Bionic, I used "default-series: bionic" in config.yaml (although I have also used xenial, with the same results). Also, since this is Bionic, I have to use "ppa:openstack-ubuntu-testing/queens" for the repository.

I have modified the deployment files in order to create the model and relationships I want, as follows:
(6) ceph-osd instances
(2) independent compute nodes, compute-lxd and compute-kvm
(1) added the lxd charm

I also changed ceph to use bluestore and changed the replication factor from 3 to 1 in all charms as an optional argument, except gnocchi, where it is not available in the charm (though it should be). The reason is that once I get ceph working, I plan to use erasure coding and a caching tier to maximize storage resources.

To date, the only way to get the deployment to complete is to pin the charm versions in at least the ceph charms, then have juju upgrade the charms manually. The results are the same, upgraded or not.

If I use my bundle-bionic-queens-kvm-lxd2.yaml deployment I get the following:

udeadmin@ude:~/openstack-on-lxd$ juju status
Model Controller Cloud/Region Version SLA Timestamp
default lxd localhost/localhost 2.4.1 unsupported 17:56:51Z

App Version Status Scale Charm Store Rev OS Notes
ceilometer 10.0.1 waiting 1 ceilometer jujucharms 253 ubuntu
ceilometer-agent 10.0.1 active 2 ceilometer-agent jujucharms 244 ubuntu
ceph-mon 12.2.4 active 3 ceph-mon jujucharms 25 ubuntu
ceph-osd 12.2.4 error 6 ceph-osd jujucharms 266 ubuntu
ceph-radosgw 12.2.4 active 1 ceph-radosgw jujucharms 258 ubuntu
cinder 12.0.3 active 1 cinder jujucharms 272 ubuntu
cinder-ceph 12.0.3 active 1 cinder-ceph jujucharms 233 ubuntu
designate 6.0.1 active 1 designate jujucharms 19 ubuntu
designate-bind 9.11.3+dfsg active 1 designate-bind jujucharms 13 ubuntu
glance 16.0.1 active 1 glance jujucharms 265 ubuntu
gnocchi 4.2.4 error 1 gnocchi jujucharms 10 ubuntu
heat 10.0.1 active 1 heat jujucharms 253 ubuntu
keystone 13.0.0 active 1 keystone jujucharms 282 ubuntu
lxd 3.0.1 active 1 lxd jujucharms 18 ubuntu
memcached unknown 1 memcached jujucharms 21 ubuntu
mysql 5.7.20-29.24 active 1 percona-cluster jujucharms 266 ubuntu
neutron-api 12.0.3 active 1 neutron-api jujucharms 260 ubuntu
neutron-gateway 12.0.3 active 1 neutron-gateway jujucharms 252 ubuntu
neutron-openvswitch 12.0.3 active 2 neutron-openvswitch jujucharms 250 ubuntu
nova-cloud-controller 17.0.5 active 1 nova-cloud-controller jujucharms 310 ubuntu
nova-compute-kvm 17.0.5 active 1 nova-compute jujucharms 284 ubuntu
nova-compute-lxd 17.0.5 active 1 nova-compute jujucharms 284 ubuntu
openstack-dashboard 13.0.1 active 1 openstack-dashboard jujucharms 259 ubuntu
rabbitmq-server 3.6.10 active 1 rabbitmq-server jujucharms 74 ubuntu

Unit Workload Agent Machine Public address Ports Message
ceilometer/0* waiting idle 0 172.27.1.34 Incomplete relations: database
ceph-mon/0* active idle 1 172.27.1.133 Unit is ready and clustered
ceph-mon/1 active idle 2 172.27.1.135 Unit is ready and clustered
ceph-mon/2 active idle 3 172.27.1.180 Unit is ready and clustered
ceph-osd/0 error idle 4 172.27.1.12 hook failed: "mon-relation-changed"
ceph-osd/1 error idle 5 172.27.1.74 hook failed: "mon-relation-changed"
ceph-osd/2 error idle 6 172.27.1.182 hook failed: "mon-relation-changed"
ceph-osd/3* error idle 7 172.27.1.87 hook failed: "mon-relation-changed"
ceph-osd/4 error idle 8 172.27.1.179 hook failed: "mon-relation-changed"
ceph-osd/5 error idle 9 172.27.1.24 hook failed: "mon-relation-changed"
ceph-radosgw/0* active idle 10 172.27.1.183 80/tcp Unit is ready
cinder/0* active idle 11 172.27.1.35 8776/tcp Unit is ready
cinder-ceph/0* active idle 172.27.1.35 Unit is ready
designate-bind/0* active idle 13 172.27.1.246 Unit is ready
designate/0* active idle 12 172.27.1.121 9001/tcp Unit is ready
glance/0* active idle 14 172.27.1.61 9292/tcp Unit is ready
gnocchi/0* error idle 15 172.27.1.86 8041/tcp hook failed: "identity-service-relation-changed"
heat/0* active idle 16 172.27.1.20 8000/tcp,8004/tcp Unit is ready
keystone/0* active idle 17 172.27.1.240 5000/tcp Unit is ready
memcached/0* unknown idle 18 172.27.1.106 11211/tcp
mysql/0* active idle 19 172.27.1.146 3306/tcp Unit is ready
neutron-api/0* active idle 20 172.27.1.221 9696/tcp Unit is ready
neutron-gateway/0* active idle 21 172.27.1.42 Unit is ready
nova-cloud-controller/0* active idle 22 172.27.1.53 8774/tcp,8778/tcp Unit is ready
nova-compute-kvm/0* active idle 23 172.27.1.196 Unit is ready
ceilometer-agent/1 active idle 172.27.1.196 Unit is ready
neutron-openvswitch/1 active idle 172.27.1.196 Unit is ready
nova-compute-lxd/0* active idle 24 172.27.1.22 Unit is ready
ceilometer-agent/0* active idle 172.27.1.22 Unit is ready
lxd/0* active idle 172.27.1.22 Unit is ready
neutron-openvswitch/0* active idle 172.27.1.22 Unit is ready
openstack-dashboard/0* active idle 25 172.27.1.55 80/tcp,443/tcp Unit is ready
rabbitmq-server/0* active idle 26 172.27.1.65 5672/tcp Unit is ready

Machine State DNS Inst id Series AZ Message
0 started 172.27.1.34 juju-fc635b-0 bionic Running
1 started 172.27.1.133 juju-fc635b-1 bionic Running
2 started 172.27.1.135 juju-fc635b-2 bionic Running
3 started 172.27.1.180 juju-fc635b-3 bionic Running
4 started 172.27.1.12 juju-fc635b-4 bionic Running
5 started 172.27.1.74 juju-fc635b-5 bionic Running
6 started 172.27.1.182 juju-fc635b-6 bionic Running
7 started 172.27.1.87 juju-fc635b-7 bionic Running
8 started 172.27.1.179 juju-fc635b-8 bionic Running
9 started 172.27.1.24 juju-fc635b-9 bionic Running
10 started 172.27.1.183 juju-fc635b-10 bionic Running
11 started 172.27.1.35 juju-fc635b-11 bionic Running
12 started 172.27.1.121 juju-fc635b-12 bionic Running
13 started 172.27.1.246 juju-fc635b-13 bionic Running
14 started 172.27.1.61 juju-fc635b-14 bionic Running
15 started 172.27.1.86 juju-fc635b-15 bionic Running
16 started 172.27.1.20 juju-fc635b-16 bionic Running
17 started 172.27.1.240 juju-fc635b-17 bionic Running
18 started 172.27.1.106 juju-fc635b-18 bionic Running
19 started 172.27.1.146 juju-fc635b-19 bionic Running
20 started 172.27.1.221 juju-fc635b-20 bionic Running
21 started 172.27.1.42 juju-fc635b-21 bionic Running
22 started 172.27.1.53 juju-fc635b-22 bionic Running
23 started 172.27.1.196 juju-fc635b-23 bionic Running
24 started 172.27.1.22 juju-fc635b-24 bionic Running
25 started 172.27.1.55 juju-fc635b-25 bionic Running
26 started 172.27.1.65 juju-fc635b-26 bionic Running

bundle-bionic-queens-kvm-lxd2.yaml.txt

I have run bundle-bionic-queens-kvm-lxd.yaml in the past and had everything complete cleanly, but now it behaves the same as bundle-bionic-queens-kvm-lxd2.yaml.

bundle-bionic-queens-kvm-lxd.yaml.txt

My hope is that someone is actually maintaining this and can help identify the issue and how to fix it.
