photog.social is one of the many independent Mastodon servers you can use to participate in the fediverse.
A place for your photos and banter. Photog first is our motto. Please refer to the site rules before posting.


#postgres


Hm. I normally use github.com/pgautoupgrade/docke whenever I need to deploy a #postgres container because I can just increase the version in my #Ansible playbooks to perform an upgrade.

Now I need #postgis for #dawarich and there doesn't seem to be such a convenient fork to ease future upgrades.

Am I missing something? Do I really want to bother future-me with manual upgrades that may include dumping, updating and restoring? Or would I just add these steps to my playbook?

GitHub - pgautoupgrade/docker-pgautoupgrade: A PostgreSQL Docker container that automatically upgrades your database
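For illustration, the version-bump workflow described above might look like this as a playbook task using the community.docker collection; the task name, image tag, volume, and variable names are hypothetical:

```yaml
# Upgrading PostgreSQL is then just a matter of bumping the image tag below
# and re-running the playbook; pgautoupgrade handles the data migration.
- name: Deploy PostgreSQL with automatic major-version upgrades
  community.docker.docker_container:
    name: postgres
    image: pgautoupgrade/pgautoupgrade:17-bookworm  # bump this tag to upgrade
    restart_policy: unless-stopped
    volumes:
      - pgdata:/var/lib/postgresql/data
    env:
      POSTGRES_PASSWORD: "{{ postgres_password }}"
```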

I've used #MySQL before (last time was 2016, I think), but today I'm mostly on #Postgres (and sometimes #SQLite). I know PG is superior to MySQL in pretty much every aspect I can think of, but still, MySQL seems to have quite a big user base.

What's the catch? What am I missing here? Why would someone use MySQL over Postgres to build something, since Postgres [apparently] is better than MySQL in every single possible aspect?

It's an honest question. Please help me understand it – and perhaps I'll consider modern MySQL/MariaDB for my next projects. :)

I'm happy: after a difficult start with performance management, I really feel we've reached something satisfactory here, with very limited resources.

I'll write a blog post on the subject, but I think it mostly comes down to disabling the automatic cleanup (vacuum) of the Postgres database:
Deleting an entry in a Postgres database does not actually remove the data from disk. That cleanup only happens after X deletions.

The problem is that this process is very expensive, and the way Pleroma/Akkoma is written produces a lot of deletions, hence a lot of vacuuming.

The database is therefore continually busy with this process, which prevents it from responding properly to user requests.

What I did here was disable that process entirely; it now runs once and only once each night, while the server is down for backups.

I suppose I could have tweaked the parameters to just do less autovacuum, but we would still have fairly regularly hit queries that take ages because of it.

I think the current setup is satisfactory!

#postgres #akkoma
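For reference, the setup described in the post above (autovacuum off, one manual run per night) can be sketched as a config change plus a cron entry; the database name, user, and schedule below are illustrative:

```
# postgresql.conf: turn off autovacuum entirely.
# Trade-off: dead tuples accumulate all day, so the nightly run is mandatory.
autovacuum = off

# crontab: one VACUUM ANALYZE per night, while the server is down for backup.
0 3 * * * psql -U akkoma -d akkoma -c 'VACUUM ANALYZE;'
```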

Install and run a PostgreSQL database locally on Linux, macOS, or Windows. PostgreSQL can be bundled with your application or downloaded on demand.

This library provides an embedded-like experience for PostgreSQL similar to what you would have with SQLite.

github.com/theseus-rs/postgres

Embed PostgreSQL database. Contribute to theseus-rs/postgresql-embedded development by creating an account on GitHub.
Replied in thread

@alexantemachina@mastodon.social

Why do you think that?
A full disk doesn't damage a database in the sense that already-stored data suddenly disappears; rather, new data either isn't stored at all or is stored corrupted.

Fundamentally, I also don't understand why #HomeAssistant uses #SQlite as its default. Every database-backed system I've installed so far was either set up directly on #mySQL, #MariaDB or #Postgres, or at least came with a recommendation not to use SQlite.

#postgres NOTIFY/LISTEN shouldn't be used with multiple writers.

It's an immensely powerful utility, but terrible if you try to do multiple executions in parallel through transactions. 🫠 If you use triggers, you can have multiple writers, but you lose the atomic operation.

recall.ai/blog/postgres-listen

www.recall.ai: Postgres LISTEN/NOTIFY does not scale. Postgres LISTEN/NOTIFY can cause severe performance issues under high write concurrency due to a global lock during commit. Learn why it doesn't scale and how to avoid outages.
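The failure mode in the linked article comes from NOTIFY inside concurrent transactions: every committing transaction that issued a NOTIFY must serialize on a global lock. A sketch of the problematic pattern (table and channel names are made up):

```sql
-- Writer transaction; many of these run in parallel.
BEGIN;
INSERT INTO jobs (payload) VALUES ('...');
NOTIFY job_created;  -- queued until commit
COMMIT;              -- commit takes a global lock on the notification
                     -- queue, so concurrent committers serialize here
```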

In testing a postgres upgrade (13->15) for our mastodon server, I ran into a fatal error: 'Your installation contains user-defined objects that refer to internal polymorphic functions with arguments of type "anyarray" or "anyelement".' I haven't been able to find a fix for this online, and there seems to be no logfile to pinpoint the cause. Anyone have any suggestions on how to fix this? #techquestion #mastoadmin #postgres [Edit: solved, see reply]

Dear #Fediverse, I need your recommendation:

I'm building a #website that in the end consists of a #Docker #NodeJS image/container and a #Postgres container. What options are there to host those containers on the #Web? I'm looking for a cheap or even free (?!) solution. I expect less than 50 users, so big hardware or scalability is not an issue. Would a virtual server with #minikube be a viable option? I'm a Docker and #Kubernetes newb so please bear with me.

pro tip for user interface designers:

if you have hundreds of millions of dollars of venture capital and you want to make a user facing data analytics tool of some kind and you think it's reasonable to ask an average human being to type this:

CAST('2023-05-01' AS TIMESTAMP)

to do literally anything with a date or time in your application's user interface, just stop right there. do not pass go, do not collect $200, and do not ever attempt to offer feedback to a UX designer ever again. something is deeply broken inside you that means there are certain mysteries of the universe that even the guys who designed the postgres command line can access that you will never know, and that's ok. You can still live a really rad life.

Yesterday I upgraded my Mastodon instance to go from PostgreSQL 15 to PostgreSQL 17. The whole process was so simple that it almost seems silly to blog about it, but I did it anyway, in case anyone is interested:

blog.thms.uk/2025/06/upgrade-p

Special thanks to @ruud, who told me about link mode in pg_upgradecluster, which was the missing piece I needed in order to get the upgrade done with my limited disk space.

blog.thms.uk: Upgrading Mastodon from PostgreSQL 15 to PostgreSQL 17
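On Debian/Ubuntu, the link-mode upgrade mentioned above boils down to something like the following; the cluster name `main` is assumed, and the exact flags should be checked against your postgresql-common version's manpage:

```
# Stop services using the database first, then upgrade in link mode
# (hard-links data files instead of copying, so little extra disk is needed).
sudo pg_upgradecluster -m upgrade --link 15 main

# Once the new cluster checks out, drop the old one.
sudo pg_dropcluster 15 main
```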