Wrong value for inodeSize in Volume status xml output. #2936

Closed
aravindavk opened this issue Nov 7, 2021 · 0 comments · Fixed by #2937

Comments

@aravindavk
Member

The inodeSize field in the Volume status XML output contains the filesystem type instead of the inode size:

<inodeSize>btrfs</inodeSize>

Steps to reproduce: run the Volume status XML command gluster vol status test-volume detail --xml

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
  <opRet>0</opRet>
  <opErrno>0</opErrno>
  <opErrstr/>
  <volStatus>
    <volumes>
      <volume>
        <volName>test-volume</volName>
        <nodeCount>2</nodeCount>
        <node>
          <hostname>glusterfs1</hostname>
          <path>/gluster/test</path>
          <peerid>8e119499-ab8d-4715-bace-2f16bfe23293</peerid>
          <status>1</status>
          <port>49159</port>
          <ports>
            <tcp>49159</tcp>
            <rdma>N/A</rdma>
          </ports>
          <pid>6014</pid>
          <sizeTotal>869729808384</sizeTotal>
          <sizeFree>703918551040</sizeFree>
          <device>/dev/sdb1</device>
          <blockSize>4096</blockSize>
          <mntOptions>rw,relatime,compress-force=lzo,space_cache,subvolid=5,subvol=/</mntOptions>
          <fsName>btrfs</fsName>
          <inodeSize>btrfs</inodeSize>
        </node>
        <node>
          <hostname>glusterfs2</hostname>
          <path>/gluster/test</path>
          <peerid>8f5ef325-5a77-473b-8d5f-b2258440ac58</peerid>
          <status>1</status>
          <port>49159</port>
          <ports>
            <tcp>49159</tcp>
            <rdma>N/A</rdma>
          </ports>
          <pid>108320</pid>
          <sizeTotal>869729808384</sizeTotal>
          <sizeFree>714670706688</sizeFree>
          <device>/dev/sdb1</device>
          <blockSize>4096</blockSize>
          <mntOptions>rw,relatime,compress-force=lzo,space_cache,subvolid=5,subvol=/</mntOptions>
          <fsName>btrfs</fsName>
          <inodeSize>btrfs</inodeSize>
        </node>
      </volume>
    </volumes>
  </volStatus>
</cliOutput>

Issue first discovered here

GlusterFS version: 9.3

aravindavk added a commit to aravindavk/glusterfs that referenced this issue Nov 7, 2021
The fs type was being written to the inodeSize xml output. It is now
fixed by writing the actual inode size variable.

Fixes: gluster#2936
Change-Id: Iaa285cb2a7dbf77d7a2cf1e2636e4285d2257124
Signed-off-by: Aravinda Vishwanathapura <[email protected]>
xhernandez pushed a commit that referenced this issue Nov 12, 2021
The fs type was being written to the inodeSize xml output. It is now
fixed by writing the actual inode size variable.

Fixes: #2936
Change-Id: Iaa285cb2a7dbf77d7a2cf1e2636e4285d2257124
Signed-off-by: Aravinda Vishwanathapura <[email protected]>