
error when trying to use all free space for DATA #169

Open
dustymabe opened this issue Nov 10, 2016 · 8 comments
Comments

@dustymabe
Contributor

On a Fedora 25 Atomic Host, docker-storage-setup errors out if I tell it to use all of the free space in the VG:

-bash-4.3# rpm-ostree status
State: idle
Deployments:
● fedora-atomic:fedora-atomic/25/x86_64/docker-host
       Version: 25.40 (2016-11-08 10:20:02)
        Commit: b02be04acde2a04a96d9e969edf3b73753967c06811fc51d65c2b2b4985dff4a
        OSName: fedora-atomic
-bash-4.3# rpm -q docker
docker-1.12.2-5.git8f1975c.fc25.x86_64
-bash-4.3# cat /etc/sysconfig/docker-storage-setup 
GROWPART=true
DATA_SIZE=100%FREE
-bash-4.3# journalctl -u docker-storage-setup.service 
-- Logs begin at Thu 2016-11-10 21:45:09 UTC, end at Thu 2016-11-10 21:52:42 UTC. --
Nov 10 21:45:15 cloudhost.localdomain systemd[1]: Starting Docker Storage Setup...
Nov 10 21:45:15 cloudhost.localdomain docker-storage-setup[966]: CHANGED: partition=2 start=616448 old: size=11966464 end=12582912 new: size=20355072,end=20971520
Nov 10 21:45:15 cloudhost.localdomain docker-storage-setup[966]:   Physical volume "/dev/sda2" changed
Nov 10 21:45:15 cloudhost.localdomain docker-storage-setup[966]:   1 physical volume(s) resized / 0 physical volume(s) not resized
Nov 10 21:45:15 cloudhost.localdomain docker-storage-setup[966]:   WARNING: D-Bus notification failed: The name com.redhat.lvmdbus1 was not provided by any .service files
Nov 10 21:45:15 cloudhost.localdomain docker-storage-setup[966]:   Rounding up size to full physical extent 12.00 MiB
Nov 10 21:45:16 cloudhost.localdomain docker-storage-setup[966]:   Logical volume "docker-poolmeta" created.
Nov 10 21:45:16 cloudhost.localdomain docker-storage-setup[966]:   WARNING: D-Bus notification failed: The name com.redhat.lvmdbus1 was not provided by any .service files
Nov 10 21:45:16 cloudhost.localdomain docker-storage-setup[966]:   Logical volume "docker-pool" created.
Nov 10 21:45:16 cloudhost.localdomain docker-storage-setup[966]:   WARNING: D-Bus notification failed: The name com.redhat.lvmdbus1 was not provided by any .service files
Nov 10 21:45:16 cloudhost.localdomain docker-storage-setup[966]:   WARNING: Converting logical volume atomicos/docker-pool and atomicos/docker-poolmeta to pool's data and metadata volumes.
Nov 10 21:45:16 cloudhost.localdomain docker-storage-setup[966]:   THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
Nov 10 21:45:16 cloudhost.localdomain docker-storage-setup[966]:   Volume group "atomicos" has insufficient free space (0 extents): 3 required.
Nov 10 21:45:16 cloudhost.localdomain systemd[1]: docker-storage-setup.service: Main process exited, code=exited, status=5/NOTINSTALLED
Nov 10 21:45:16 cloudhost.localdomain systemd[1]: Failed to start Docker Storage Setup.
Nov 10 21:45:16 cloudhost.localdomain systemd[1]: docker-storage-setup.service: Unit entered failed state.
Nov 10 21:45:16 cloudhost.localdomain systemd[1]: docker-storage-setup.service: Failed with result 'exit-code'.

I end up with a half set up system:

-bash-4.3# lsblk
NAME                        MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
loop1                         7:1    0    2G  0 loop 
└─docker-253:0-4431057-pool 253:1    0  100G  0 dm   
sdb                           8:16   0   10G  0 disk 
loop0                         7:0    0  100G  0 loop 
└─docker-253:0-4431057-pool 253:1    0  100G  0 dm   
sdc                           8:32   0  366K  0 disk 
sda                           8:0    0   10G  0 disk 
├─sda2                        8:2    0  9.7G  0 part 
│ └─atomicos-root           253:0    0    3G  0 lvm  /sysroot
└─sda1                        8:1    0  300M  0 part /boot
-bash-4.3# lvs
  LV              VG       Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  docker-pool     atomicos -wi-------  6.76g                                                    
  docker-poolmeta atomicos -wi------- 12.00m                                                    
  root            atomicos -wi-ao----  2.93g
@rhvgoyal
Collaborator

Hmm, it looks like lvconvert failed. I suspect it failed because the thin pool requires some free space (a spare metadata LV), and since the volume group is already fully consumed, there is none left, so lvconvert fails.

So the short answer is no, 100% data space will not work. Try 90% and that should work.
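Concretely, the workaround would be leaving LVM some headroom in /etc/sysconfig/docker-storage-setup (90% is the suggestion above; the exact percentage is a judgment call, not a documented minimum):

```
GROWPART=true
DATA_SIZE=90%FREE
```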

@rhvgoyal
Collaborator

For more information, run "man lvmthin" and look at the section "Spare metadata LV".

@rhatdan
Member

rhatdan commented Nov 12, 2016

Should d-s-s fail if the user specifies 100%? And do we need to update the docs?

@dustymabe
Contributor Author

TBH I would think the underlying LVM tools could accept 100%FREE and be smart enough to figure out what to do. If it is creating the metadata LV and the thin pool LV at the same time, it could easily adjust things so that size(metadata_lv) + size(thinpool_lv) == 100%FREE.

@rhatdan
Member

rhatdan commented Nov 13, 2016

That works too, if we can figure it out. But the bottom line is that we shouldn't let the user end up in the state you got into.

@rhvgoyal
Collaborator

The problem is that LVM determines the size of the spare LV internally at lvconvert time, so it does not tell us in advance what the size of the spare logical volume should be.

If we don't want LVM's internal magic, we could create the spare logical volume ourselves and pass it to lvconvert (the way we do for the metadata LV).

So we would first create the metadata LV, then the spare LV, use the rest of the free space for the data LV, and then call lvconvert.

The open question is what the size of the spare LV should be. It is not clear to me. I will check with the LVM folks and see if they have guidelines on how to select it.
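A sketch of the sequence being proposed (run as root against the "atomicos" VG from the logs above; the 12M metadata size matches what d-s-s created here, but the 95%FREE headroom is purely illustrative, since the right spare-LV size is exactly the open question):

```shell
# Metadata LV first, then leave some extents unallocated for the spare
# metadata LV by not consuming every extent with the data LV:
lvcreate -n docker-poolmeta -L 12M atomicos
lvcreate -n docker-pool -l 95%FREE atomicos
# Convert the pair into a thin pool (this step needs the leftover space):
lvconvert -y --type thin-pool \
    --poolmetadata atomicos/docker-poolmeta atomicos/docker-pool
```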

@rhvgoyal
Collaborator

Maybe there is no way to pass in a spare volume to lvconvert. At least I don't get that sense from reading man lvmthin.

@dustymabe
Contributor Author

After a long discussion on IRC with zkabelac and vgoyal we now understand the problem more clearly:

In d-s-s we create the metadata LV and the data LV separately and then convert them into a thin pool, like this example from the lvmthin man page:

       # lvcreate -n pool0 -L 10G vg
       # lvcreate -n pool0meta -L 1G vg
       # lvconvert --type thin-pool --poolmetadata vg/pool0meta vg/pool0

The problem is that the lvconvert operation tries to create a small spare metadata LV, which fails if you have already handed 100%FREE to one of the earlier operations. This is not an issue if you create the thin pool directly with lvcreate instead of setting it all up manually.

@rhvgoyal and I propose that we start passing the size directly to lvcreate --thin and let LVM manage the metadata size and the interpretation of -l 100%FREE.
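The proposed approach would collapse the three manual steps into one call, something like the following sketch (run as root; the VG and pool names match the logs above, but the exact flags d-s-s would use are still to be decided). LVM then sizes the metadata LV and reserves the spare metadata LV itself:

```shell
lvcreate -y -l 100%FREE --thinpool docker-pool atomicos
```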
