#ceph

ij

My 3-node #Ceph cluster is showing nice performance when running "bin/tootctl search deploy --only=accounts" during the Mastodon upgrade...

Rachel

Physical test fit is good!

(The 3.5in drives are not populated yet)

#Homelab #Ceph #Kubernetes #Minilab

Michael

Well of course. My Ceph cluster would produce a scrub inconsistency error while I'm 400 km away from the Homelab. Let's see what this is about.

#HomeLab #Ceph

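For anyone chasing the same warning, a minimal diagnostic sketch; the PG id `2.1a` is a placeholder, the real one comes out of the health output:

```
# Which PG is flagged inconsistent, and why is the cluster unhealthy?
ceph health detail

# List the objects that failed scrub in that PG (2.1a is a placeholder PG id)
rados list-inconsistent-obj 2.1a --format=json-pretty

# If it's a simple bad replica, let Ceph repair it from the good copies
ceph pg repair 2.1a
```
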
Marianne Spiller

Upgrade of the prod cluster from #Ceph Reef to Squid ✅ And even **before** 1 August! 🎉 Went without a hitch.

#Proxmox #Ceph

OpenStack

Are you running #OpenStack and #Ceph? Share your story at #Cephalocon in Vancouver on October 28! The CFP is currently open and closes this Sunday, July 13 at 11:59pm PT.

https://ceph.io/en/community/events/2025/cephalocon-2025/

Stuart Longland (VK4MSL)

Seems some of my disks have seen a few writes! `smartctl -a ${DEVICE}` stats for the #Ceph and virtual machine host cluster here.

The 2TB SSDs are OSDs in Ceph, the others are OS disks or local VM cache disks. Ceph is moaning about the disks in "oxygen" and "fluorine".

Numbers here are Total Drive Writes in GB.

```
$ for h in *; do for d in ${h}/tmp/*.txt; do lbas=$( grep -F Total_LBAs_Written ${d} | cut -c 88- ); if [ -n "${lbas}" ]; then dev=$( basename "${d}" .txt ); dev=/dev/${dev##*-}; sect=$( grep '^Sector Size' ${d} | cut -c 18-21 ); printf "%-20s %8s %6d %s\n" ${h} ${dev} $(( ( ${lbas} * ${sect} ) / 1024 / 1024 / 1024 )) "$( grep '^Device Model' ${d} )"; fi; done; done
beryllium.chost.lan /dev/sda 0 Device Model: WD Green 2.5 240GB
boron.chost.lan /dev/sdb 76370 Device Model: Samsung SSD 870 EVO 2TB
carbon.chost.lan /dev/sda 113892 Device Model: Samsung SSD 870 EVO 2TB
carbon.chost.lan /dev/sdb 157993 Device Model: Samsung SSD 860 EVO 2TB
fluorine.chost.lan /dev/sda 111939 Device Model: Samsung SSD 870 QVO 2TB
helium.chost.lan /dev/sda 100476 Device Model: Samsung SSD 870 QVO 2TB
hydrogen.chost.lan /dev/sda 184564 Device Model: Samsung SSD 860 EVO 2TB
hydrogen.chost.lan /dev/sdb 58602 Device Model: Samsung SSD 870 EVO 2TB
lithium.chost.lan /dev/sda 0 Device Model: WD Green 2.5 240GB
magnesium.chost.lan /dev/sda 0 Device Model: WD Green 2.5 240GB
neon.chost.lan /dev/sdb 146926 Device Model: Samsung SSD 860 EVO 2TB
nitrogen.chost.lan /dev/sda 99473 Device Model: Samsung SSD 870 EVO 2TB
oxygen.chost.lan /dev/sdb 108748 Device Model: Samsung SSD 870 QVO 2TB
sodium.chost.lan /dev/sda 0 Device Model: WD Green 2.5 240GB
```

#HomeServer

Stuart Longland (VK4MSL)

I'm a glutton for punishment this week… I'm poking the octopus again, moving to Ceph 18.

So far, we're doing the monitor daemons. A fun fact: they switched from LevelDB to RocksDB some time back, and in Ceph 17 they dropped support for LevelDB.

I found this out when updating the first monitor node: it crashed. Refused to start. All I had was a cryptic:

ceph_abort_msg("MonitorDBStore: error initializing keyvaluedb back storage")

Not very helpful. The docs for Ceph 17 (which I didn't read, as I was going from 16→18 direct, which the Ceph 18 docs say you _can_ do) merely state:

> LevelDB support has been removed. WITH_LEVELDB is no longer a supported build option. Users should migrate their monitors and OSDs to RocksDB before upgrading to Quincy.
(https://docs.ceph.com/en/latest/releases/quincy/)

How? Zero suggestions as to the procedure.

What worked here was to manually remove the monitor node, rename the /var/lib/ceph/mon/* directory, then re-add the monitor using the manual instructions.

https://docs.ceph.com/en/latest/rados/operations/add-or-rm-mons/

#Ceph

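Roughly the remove-and-re-add dance being described, following the manual procedure in the linked docs; the monitor id `mon1` and the temp paths are placeholders, and details will vary by install:

```
# Remove the broken monitor from the cluster map
ceph mon remove mon1

# Move the old LevelDB-backed store out of the way
mv /var/lib/ceph/mon/ceph-mon1 /var/lib/ceph/mon/ceph-mon1.old

# Re-create the monitor store (RocksDB) from the current cluster state
ceph auth get mon. -o /tmp/mon-keyring
ceph mon getmap -o /tmp/monmap
sudo -u ceph ceph-mon -i mon1 --mkfs --monmap /tmp/monmap --keyring /tmp/mon-keyring

# Start it again and watch it rejoin quorum
systemctl start ceph-mon@mon1
ceph -s
```
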
Stuart Longland (VK4MSL)

We're mostly done now… just a few placement groups to re-map and backfill.

Looking at the steps to go next. I want to get to Debian 12 and Ceph 19.

There are no builds of Ceph 19 for Debian 11, which is what I'm on now. Looks like once again I'll have to use the Debian repository Ceph binary, which is for Ceph 16.2.15. So I need to first comment out the Ceph repositories and update to Debian 12, *then* I can jump to Ceph 19.

Would it kill them to build packages for more than one Debian release?

#Ceph #Debian

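For anyone following along, a sketch of that sequence; the sources-list file names and the download.ceph.com repo line are assumptions, not a tested recipe:

```
# Park the upstream Ceph repo so apt doesn't fight the distro packages
sed -i 's/^deb/#deb/' /etc/apt/sources.list.d/ceph.list

# Move the OS from bullseye to bookworm
sed -i 's/bullseye/bookworm/g' /etc/apt/sources.list
apt update && apt full-upgrade

# Once on Debian 12, point at the Ceph 19 (Squid) packages and upgrade again
echo 'deb https://download.ceph.com/debian-squid/ bookworm main' > /etc/apt/sources.list.d/ceph.list
apt update && apt full-upgrade
```
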
Stuart Longland (VK4MSL)

Finally got to the post office to pick up the 6TB HDD for backing up the cluster, and last night took a back-up of all RBD volumes.

I'm part way migrated to Ceph v16 now. Monitors are updated, managers are updated, all the OSDs have been re-started. I've flipped the switch for OMAP usage stats migration… and now I re-start each OSD one by one and wait for it to convert and return to online status.

#Ceph #HomeServer

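One possible shape of such an RBD back-up, sketched with an assumed pool name `rbd` and backup mount `/mnt/backup` (both placeholders):

```
# Export every RBD image in the pool to an image file on the backup disk
for img in $(rbd ls rbd); do
    rbd export "rbd/${img}" "/mnt/backup/${img}.img"
done
```
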
Marianne Spiller

Friendly reminder that #Ceph Reef (18.x) goes EOL in August. The summer lull would be a good moment to tackle the upgrade to Squid (19.x)... 😎

Benoit

🚨 #Hiring — Platform Engineer @ Rakuten Japan 🇯🇵
Join my team! I'm leading this Platform Engineering group focused on #Kubernetes & Linux systems.

💼 Stack: #Chef #Terraform #GCP #OnPrem #RockyLinux #Rook #Ceph

📍 Tokyo | 4 days/week in-office (only one day of WFH!)

If we know each other (Fediverse or IRL), I can refer you!
#GetFediHired #FediTechJobs

🔗 https://rakuten.wd1.myworkdayjobs.com/RakutenInc/job/Tokyo-Japan/Platform-Engineer---Merchandising-and-Advertisement-Department--MAD-_1026705

Marianne Spiller

Put something on the #Blog again: on #Ceph and the BLUESTORE_SLOW_OP_ERRORs that have been floating around since one of the recent updates. Spoiler: it's less dramatic than initially assumed, and you can deal with it 🙂

https://www.unixe.de/ceph-bluestore-slow-op-error/

JdeBP

@4d3fect @bytebro

Ceph, and a lot of DASD.

And as I noted elsethread, their scale factors all appear to be powers of 10, so the fucktonnes are apparently metric.

https://epcc.ed.ac.uk/somerville-high-performance-data-intensive-storage-service-astronomy

#VeraRubinObservatorium #LSST #astronomy #Chile #VeraRubin #Ceph #metric

Heinlein Support

Now available to read: the SLAC talk slides are online!

Were you at SLAC 2025 and want to dive back into a talk? Or were you unfortunately unable to join our Linux conference this year? The slides for the talks & workshops are now online and available for download in the respective programme slots.

Stop by and take a look at the talks on #Proxmox, #Ceph, #OpenVox & co.

👉 https://www.heinlein-support.de/slac/programm-2025

#slac2025 #LinuxVorträge #LinuxWorkshops

Stuart Longland (VK4MSL)

I was planning on doing a back-up of the #Ceph cluster today in preparation for migrating it to a later release… but of course, I've misplaced the PSU for my external 4TB back-up drive.

I noticed it was getting pretty tight on space too, and given it's now 5 years old, figured it was time for a new one. I've got a 6TB one coming.

In the meantime, I had another crack at the Ericsson F5521GW 3G module in the Toughpad.

If I boot with init=/bin/bash on the kernel command line, then modprobe cdc-acm, I see 3 ACM devices appear, and I can use `screen` with them.

I managed to remove the old PIN code on the SIM card I have in there, but couldn't get the GPS going; the most I got out of it was an "Operation not allowed" error.

Doing this in early init seems to be enough to convince the module to stay enumerated. Something, though, tells it to go bugger off early in the boot sequence unless I talk to it first.

A bit of a nuisance to be honest… but I'm somewhat closer.

#ToughPad #EricssonF5521gw

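For the modem side, a rough sketch of that poking; the device node and PIN are purely placeholders, and AT command support differs between firmwares:

```
# Bind the module's control ports and open one of them
modprobe cdc-acm
screen /dev/ttyACM0 115200

# Then, inside the terminal (standard 3GPP AT commands):
# AT+CPIN?              <- query SIM/PIN state
# AT+CLCK="SC",0,"1234" <- disable the PIN lock, "1234" being the old PIN
```
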
Stuart Longland (VK4MSL)

I have gotten through all but one… just doing one at a time. Last night I got two done. This morning I got one done before I headed to work.

During the work day, I had one scripted to wait for `ceph health` to report `HEALTH_OK`, wait an hour, then do its update unattended. I checked on it later from work and it had updated just fine.

So I scheduled the same to happen with one of my bigger nodes, "hydrogen" (they're all a periodic table naming scheme as I started with Intel Atoms). I've just finished it now, leaving one left: "carbon". It, like "hydrogen", has two disks.

I'll do the same there… wait for `HEALTH_OK`, wait an hour longer to be sure, then mark its disks out and begin the update.

I expect it'll do it overnight, and by morning I should have a Debian 11 install there.

#HomeServer #Ceph

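A minimal sketch of that kind of wait-then-update script, with the actual update step left as a commented placeholder:

```
#!/bin/bash
# Block until the cluster reports HEALTH_OK, then give it another hour of margin
until ceph health | grep -q HEALTH_OK; do
    sleep 60
done
sleep 3600

# Placeholder for the real work: mark this node's OSDs out, then upgrade, e.g.
# ceph osd out <osd-id>
# apt update && apt full-upgrade -y
```
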
Stuart Longland (VK4MSL)

20:56:10 up 514 days, 6:15, 1 user, load average: 0.94, 0.85, 0.64

Just updating a couple of my #Ceph nodes… it's been a long time on Debian 10, but I need to move on. Ansible no longer works with it.

The procedure so far is to remove the old Ceph repositories (as they don't support anything beyond Debian 10) and lean on the fact that Debian 11 ships Ceph 14.

It's a slightly older release of Ceph 14, but it'll be enough… I can get the OS updated on all the nodes by doing them one by one… then when I get them all up to Debian 11, I should be able to jump to the next stable release of Ceph… then do another round of OS updates to move to Debian 12.

Ohhh boy.

#HomeServer

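A sketch of what that per-node step might look like; the repo file name is an assumption, and the sed over sources.list glosses over the bullseye-security suite rename:

```
# Drop the upstream repo (no builds beyond Debian 10) and check what Debian itself ships
rm /etc/apt/sources.list.d/ceph.list
apt-cache policy ceph-osd

# Move this node from buster to bullseye
sed -i 's/buster/bullseye/g' /etc/apt/sources.list
apt update && apt full-upgrade
```
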
Rachel

Ok, probably another week to go until I get these drives running.

I should probably use that to figure out backups for this current pool.....

If the bulk storage pool in ceph works well enough I'll ship the current file server off to the parents', but for now I'll be able to use the LAN to do initial syncs.

S3-compatible backup via Velero is one option, with MinIO or Garage (https://garagehq.deuxfleurs.fr/) running in a container backed by ZFS (I am not building a remote ceph cluster 😅).

Anyone have thoughts/suggestions on backup strategy here? Probably backing up 2-3TB of data total (lots of photos).

I'll end up with local snapshots and remote backups; huge bonus if they can be recovered without needing a ceph cluster to restore to in the case of something catastrophic.

#HomeLab #Ceph

Thoralf Will 🇺🇦🇮🇱🇹🇼

Finished my test of #ceph on my #Proxmox cluster.

It worked, but the performance overhead is enormous and out of all sensible proportion to the benefit, which would definitely be there.

So everything has been rolled back to zfs. I may keep a small cluster for backups and the like.

Thoralf Will 🇺🇦🇮🇱🇹🇼

One idea for the "how?" in my head would be:

1. On each node, split one disk out of the #zfs mirror.
2. Build #ceph from those 3 disks.
3. Migrate the volumes from zfs to ceph.
4. Check everything.
5. Tear down zfs.
6. Hand the remaining disks over to ceph.

Would that work?

Alternatively, I still have 3 disks lying around (1, 2 and 4 TB) that I could use to build an initial ceph environment, move one node over completely, and then work my way through piece by piece...

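If one tried steps 1 and 2 on a Proxmox node, the rough shape might be as below; pool and device names are placeholders, and detaching a mirror leg drops its redundancy, so only with current backups:

```
# Step 1: pull one leg out of the ZFS mirror on this node
zpool detach rpool /dev/sdb

# Step 2: wipe the freed disk and hand it to Ceph as an OSD (Proxmox tooling)
ceph-volume lvm zap /dev/sdb --destroy
pveceph osd create /dev/sdb
```
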