
Re: Solaris 11.1 kstat cpu_info reports clock speed at half value

  • pcsol1996
    Message 1 of 13, Jan 2, 2013
      --- In solarisx86@yahoogroups.com, Peter Schow <pschow@...> wrote:
      >
      > On Fri, Dec 28, 2012 at 09:17:21PM -0000, pcsol1996 wrote:
      > > Do I have a hardware/config issue with the X3-2 or is this just some sort
      > > of weirdness with Solaris and psrinfo? I also checked my Ultra 40 running
      > > Solaris 11.1 and it reports the 3Ghz AMD CPUs at 3000Mhz.
      >
      > If you want to verify your CPU speeds, you can run smbios, a la:
      >
      > smbios -t 4
      >
      > which should tell you current and max.
      >
      Thanks for all the replies, folks.
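      As a sketch of the verification Peter suggests (standard Solaris 11 commands; exact output formatting varies by release and hardware), the reported clock speed can be cross-checked from several sources:

      ```shell
      # Cross-check CPU clock speeds on Solaris 11 from independent sources.
      psrinfo -pv                    # per-socket summary, includes the clock rating
      kstat -m cpu_info | grep -i clock   # the kstat statistics the original post questioned
      smbios -t 4                    # SMBIOS processor records: current and max speed
      ```

      If psrinfo and kstat disagree with the SMBIOS maximum, power management (e.g. a reduced current clock) is a likely explanation.
      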
  • Marc Lobelle
    Message 2 of 13, Jan 7, 2013
        Dear all,

        First my best wishes for 2013!

        I just noticed a surprising disk space leak on my Solaris 11 notebook (an HP EliteBook 2540 with an
        SSD drive).
        The drive is partitioned into a Windows 7 partition of about 60 GB and a Solaris partition of about
        150 GB. When I tried to update Solaris 11, as I am regularly prompted to do, it could not, because
        there was not enough disk space. So I freed everything I could spare, and now there are 9 free GB,
        so I should be able to do the update. However, if I run df -k, I read:
        -[2]-~# df -k
        Filesystem 1024-blocks Used Available Capacity Mounted on
        rpool/ROOT/solaris-4 143474688 15044076 9966849 61% /
        /devices 0 0 0 0% /devices
        /dev 0 0 0 0% /dev
        ctfs 0 0 0 0% /system/contract
        proc 0 0 0 0% /proc
        mnttab 0 0 0 0% /etc/mnttab
        swap 3864148 692 3863456 1% /system/volatile
        objfs 0 0 0 0% /system/object
        sharefs 0 0 0 0% /etc/dfs/sharetab
        /usr/lib/libc/libc_hwcap1.so.1
        25010925 15044076 9966849 61% /lib/libc.so.1
        fd 0 0 0 0% /dev/fd
        swap 3863508 52 3863456 1% /tmp
        rpool/export 143474688 32 9966849 1% /export
        rpool/export/home 143474688 32 9966849 1% /export/home
        rpool/export/home/ml 143474688 14011760 9966849 59% /export/home/ml
        rpool 143474688 98 9966849 1% /rpool
        /export/home/ml 23978609 14011760 9966849 59% /home/ml

        If I understand correctly, the disk has 143 GB, of which my personal directories use 14 GB and
        the system 15 GB.

        Where are the missing 110 GB?

        Does anybody understand this?

        Thanks

        Marc
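      For context, a rough back-of-the-envelope over the df -k figures above (a sketch only: these ZFS datasets share one pool, so the per-filesystem sizes overlap) shows how much space df itself leaves unexplained:

      ```shell
      # Figures in KB, copied from the df -k output above.
      total=143474688      # reported size of rpool/ROOT/solaris-4
      root_used=15044076   # used on /
      home_used=14011760   # used on /export/home/ml
      avail=9966849        # available

      # Space not visible to df through the mounted filesystems:
      unaccounted=$((total - root_used - home_used - avail))
      echo "${unaccounted} KB (~$((unaccounted / 1048576)) GB) not visible to df"
      ```

      Roughly 100 GB unaccounted for, which matches the "missing 110 gigs" order of magnitude; the replies in this thread attribute it to snapshots and old boot environments.
      
      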
      • Laurent Blume
        Message 3 of 13, Jan 7, 2013
          On 01/07/13 12:19, Marc Lobelle wrote:
          > Dear all,
          >
          > First my best wishes for 2013!

          Yes! Happy new year to all!

          > I just noticed a surprising disk space leak on my Solaris 11 notebook (an HP EliteBook 2540 with an
          > SSD drive).
          > The drive is partitioned into a Windows 7 partition of about 60 GB and a Solaris partition of about
          > 150 GB. When I tried to update Solaris 11, as I am regularly prompted to do, it could not, because
          > there was not enough disk space. So I freed everything I could spare, and now there are 9 free GB,
          > so I should be able to do the update. However, if I run df -k, I read:
          > -[2]-~# df -k

          [snip]

          > If I understand correctly, the disk has 143 GB, of which my personal directories use 14 GB and
          > the system 15 GB.
          >
          > Where are the missing 110 GB?
          >
          > Does anybody understand this?

          Mostly, with ZFS, df has become largely irrelevant for estimating disk
          space use.

          What you need to do is run «beadm list» and «zfs list -t all -r rpool».
          The first will show you whether there are old BEs that are no longer
          needed (the space they actually use is always bigger than what the
          command says). The second will show you all the datasets on rpool,
          mounted or not, and their snapshots.
          There are probably some of those that you can destroy.

          Hope this helps,

          Laurent
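          A minimal sketch of the inspect-then-clean-up sequence Laurent describes (the BE and snapshot names below are placeholders, not the poster's actual ones):

          ```shell
          # Inspect: what is there, and how much does it hold?
          beadm list                 # boot environments; 'NR' flags the active one
          zfs list -t all -r rpool   # every dataset and snapshot under rpool

          # Clean up, once you are sure a BE or snapshot is no longer needed.
          # Placeholder names: substitute your own from the listings above.
          beadm destroy solaris-1                    # remove an old, inactive BE
          zfs destroy rpool/ROOT/solaris-4@install   # remove an individual snapshot
          ```

          The destroy commands are irreversible, so it is worth re-running «zfs list» between steps and keeping at least one known-good previous BE.
          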
        • Laurent Blume
          Message 4 of 13, Jan 9, 2013
            On 08/01/13 00:06, Marc Lobelle wrote:
            > there are 5:
            > solaris: 91.78 M dated 2010-12-08
            > solaris-1 : 54,69 M dated 2011-02-03
            > solaris-2: 34,13 M dated 2011-02-03
            > solaris-3: 691,0K dated 2012-11-0
            >
            > may I safely destroy these old boot environments?

            The usual rule of thumb is to keep at least one working previous
            environment, just in case. So you can remove the older ones, yes. Also
            note that when you upgrade the rpool's zpool or zfs versions, previous
            BEs that don't support them become unusable, so they can be destroyed.

            > solaris-4: NR / 110,85 G 2012-11-06, which is obviously the one I use,
            > but the partition is larger: where are the missing 33G ?

            The calculations for space use are very confusing. Space that is
            shared between datasets is not shown. So check the free space before
            and after you destroy the oldest BEs: you will see it grows more than
            expected.

            > I see first
            > rpool used: 127G, avail 9,49G, refer 98K mounted on /rpool
            > rpool/root 110G 9,49G 31K legacy
            >
            > Next I see mounted rpools named after the BEs, plus unmounted ones called
            > rpool/ROOT/solaris-4@install
            > rpool/ROOT/solaris-4@2012-02-03-08-...
            > rpool/ROOT/solaris-4@2012-02-03-09-...
            > rpool/ROOT/solaris-4@2012-11-06-10-...
            > rpool/ROOT/solaris-4@2012-11-06-11-...
            > What is in there and what is their use? The used space in these rpools is
            > small, but the last column, called refer, says 3,34G, 97,7G, 98,8G,
            > 14,3G, 14,3G. What does that column mean?
            > and finally dump, export export/home, export/home/ml and swap
            > What should I do to recover disk space ?

            Those snapshots are probably the ones using most of the space. If you
            don't need them anymore, you can remove them. The same caveat about
            shared space applies: «refer» is how much data the snapshot references
            in total (some of it can be shared with others), while «used» is the
            space specific to that snapshot alone. So destroying a snapshot will
            recover some amount between the two.
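            The «used» vs «refer» bounds can be put in numbers (the figures below are hypothetical, chosen only to illustrate the accounting, not taken from the thread):

            ```shell
            # Hypothetical snapshot accounting, in MB (illustrative only).
            refer=97700                # everything the snapshot references
            used=150                   # unique to this snapshot: freed immediately on destroy
            shared=$((refer - used))   # may still be referenced by other snapshots or the live FS
            echo "destroying frees at least ${used} MB and at most ${refer} MB"
            ```

            How much of the shared portion actually comes back depends on whether other snapshots still reference those blocks, which is why freeing space often takes several destroys.
            
            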

            > du -ks / says 32.149.02 = 32,15G + 9,49 free = 41,64 so where are the
            > missing 90 GB?

            They are in the snapshots; du cannot account for them.

            Laurent
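            The space that du cannot see can be attributed directly with the space columns ZFS itself provides; a one-liner sketch:

            ```shell
            # Per-dataset space breakdown: the USEDSNAP column is what du misses.
            zfs list -o space -r rpool
            ```

            The USEDSNAP column shows how much of each dataset's usage is held by its snapshots, versus USEDDS for the live data.
            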