#Development #Approaches
Proudest of these 128 kilobytes · Constraint-driven web development to the extreme https://ilo.im/165bzl
_____
#Constraints #Device #Browser #Accessibility #WebPerf #WebDev #Frontend #Fonts #SVG #JavaScript

The more we can deliver, the more resources we consume.
Animals "use efficiency to catalyze sprawl. […] The road to degrowth will require drastic social change."
@blair_fix wrote: https://economicsfromthetopdown.com/2024/05/18/a-tour-of-the-jevons-paradox-how-energy-efficiency-backfires/
What do you get if you combine #grammars, #constraints, #evolutionary algorithms, and #Python in one? A mighty fuzzer! Check out our latest #FANDANGO work, to appear at #ISSTA2025:
https://publications.cispa.de/articles/standard/FANDANGO_Evolving_Language-Based_Testing/28769252?file=53591066
To try out Fandango yourself, check out its home page: https://fandango-fuzzer.github.io/
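To make the combination concrete, here is a toy sketch of the general idea (this is NOT the FANDANGO API, just a minimal illustration): inputs are produced from a tiny grammar, and an evolutionary loop mutates the population toward satisfying a semantic constraint that the grammar alone cannot express.

```python
# Toy grammar-plus-constraint fuzzing loop (illustrative only, not FANDANGO).
import random

# Tiny "grammar": <num> ::= <digit><digit><digit>
def generate():
    return "".join(random.choice("0123456789") for _ in range(3))

def satisfies(s):
    # Example semantic constraint: the number must be divisible by 7
    return int(s) % 7 == 0

def mutate(s):
    # Replace one random digit, staying within the grammar
    i = random.randrange(len(s))
    return s[:i] + random.choice("0123456789") + s[i + 1:]

random.seed(0)
population = [generate() for _ in range(20)]
for _ in range(200):
    if any(satisfies(s) for s in population):
        break
    # Fitness: distance to the constraint; keep the closest, mutate them
    population.sort(key=lambda s: int(s) % 7)
    population = population[:10] + [mutate(s) for s in population[:10]]

solutions = [s for s in population if satisfies(s)]
```

The real tool evolves derivation trees for arbitrary grammars and arbitrary Python constraints, but the generate/evaluate/mutate skeleton is the same shape.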
“Beginning #programmers are often so eager to accomplish the first part of that definition—writing a #program to perform a certain #task—that they fail on the second part of the definition, meeting the stated #constraints.
I call a program like that, one that appears to produce correct results but breaks one or more of the stated rules, a #KobayashiMaru.” — #VAntonSpraul
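A minimal hypothetical example of such a program (the task and function name are invented for illustration): it produces the expected output, yet violates a stated rule along the way.

```python
# Task: return a new list with each value squared.
# Stated constraint: the input list must NOT be modified.
def squares(nums):
    for i in range(len(nums)):
        nums[i] = nums[i] * nums[i]   # mutates the caller's list in place
    return nums

data = [1, 2, 3]
result = squares(data)
print(result)   # [1, 4, 9] -- the output looks correct...
print(data)     # [1, 4, 9] -- ...but the constraint was broken:
                # the original input is gone
```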
@markwyner @photovince @nonehitwonder
128 MB for two sides of one perfect mixtape
#constraints
@charlotte There are certainly contexts in which Unicode unambiguously and demonstrably leads to security weaknesses and issues. See generally homoglyph attacks.
At the heart of the lie and damage is the existence of a message which appears to say one thing but in fact says something different. It's the very limited nature of 7-bit ASCII, 128 characters in total, which provides its utility here. Yes, this means that texts in other languages must be represented by transliterations and approximations. That's ... simply a necessary trade-off.
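The homoglyph problem is easy to demonstrate (the spoofed brand name below is a standard illustration): two strings can render with visually identical glyphs while being entirely different sequences of code points.

```python
import unicodedata

latin = "paypal"
spoof = "p\u0430yp\u0430l"   # U+0430 CYRILLIC SMALL LETTER A looks like "a"

print(spoof)                        # renders like "paypal"
print(latin == spoof)               # False: same glyphs, different code points
print(unicodedata.name(spoof[1]))   # CYRILLIC SMALL LETTER A
```

This is exactly the "message which appears to say one thing but in fact says something different" case: any comparison, filter, or allowlist keyed on the visible text silently fails.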
We see this in other domains, in which for the purposes of reducing ambiguity and emphasizing clarity standardisation is adopted.
Internationally, air traffic control communications occur in English, and aircraft navigation uses feet (altitude) and nautical miles (distance) as units.
Through the early 20th century, the language of diplomacy was French. The language of much scientific discourse, particularly in physics, was German. And for the Catholic Church, Latin was abandoned for mass only in the 1960s.
Trading and maritime cultures tend to create pidgin languages --- common amongst participants, but foreign to all, as distinguished from a creole, an amalgam language with native speakers.
A key problem with computers is that the encodings used to create visual glyphs and the glyphs themselves are two distinct entities, and there can be a tremendous amount of ambiguity and confusion over similarly-appearing characters. Or, in many cases, glyphs cannot be represented at all.
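The encoding/glyph split shows up even within a single script: the same rendered character can be more than one code-point sequence, so equality of what you see is not equality of what the computer compares (Unicode normalization exists precisely to paper over this).

```python
import unicodedata

composed = "\u00e9"      # "é" as a single code point (U+00E9)
decomposed = "e\u0301"   # "e" followed by U+0301 COMBINING ACUTE ACCENT

print(composed == decomposed)            # False, though both render as "é"
print(len(composed), len(decomposed))    # 1 2
# NFC normalization composes the pair back into the single code point
print(unicodedata.normalize("NFC", decomposed) == composed)   # True
```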
Where the full expressive value of language is required --- within texts, in descriptive fields, and in local or native contexts --- I'm ... mostly ... open to Unicode (though it can still present problems).
Where what is foremost in functionality is broad and universal understanding, selecting a small, standardised, and widely-recognised character set has tremendous value, and no amount of emotive shaming changes that fact.
As an example, OpenStreetMap generally represents local place names in the local language and character set. This may preserve respect for, or the integrity of, the local culture. As a user of the map, however, not knowing that language or character set, it is utterly useless to me. Or, quite frankly, to anyone not specifically literate in that language and writing system.
It's worth considering that the character set and language in question are themselves adoptions and impositions: English was brought into Britain by invaders, and the alphabet used is itself Roman, based on Greek and originally Phoenician glyphs. English has adopted or incorporated terms from a huge set of other languages (rendering its own internal consistency ... low ... and making it confusing to learn).
International communications and signage, at airports, on roadways, in public buildings, on electronic devices, aims at small message sets and consistent, widely-recognised symbols, shapes, fonts, and colours. That is a context in which the freedoms of unfettered Unicode adoption are in fact hazardous.
(Yes, many of those symbols now have Unicode code points. It is the symbol set and glyph set which is constrained in public usage.)
And the simple fact is that a widely recognised encoding system will most often reflect some power structure or hierarchy, as that's how these encodings become known --- English, the Roman alphabet, French, German, Latin, etc. Minor powers tend not to find their writing systems widely adopted (yes, there are exceptions: use of Greek within the Roman empire, Hindu numbering systems). Again, exceptions.
Complexity expands to meet all constraint boundaries.
well… my #pleroma instance went #unusable after it passed 1000 peers and the #db grew to about 4.6GB. this led to overly long db queries, which terminate with errors like
15:45:27.704 [error] Postgrex.Protocol (#PID<0.493.0>) disconnected: ** (DBConnection.ConnectionError) client #PID<0.25671.7> timed out because it queued and checked out the connection for longer than 15000ms
15:45:27.705 request_id=… [error] Internal server error: %DBConnection.ConnectionError{message: "tcp recv: closed (the connection was closed by the pool, possibly due to a timeout or because the pool has been terminated)"}
I tried to run vacuum full analyze; on my pleroma db, but it did not help. the second thing I did was run mix pleroma.database prune_objects and again vacuum full analyze;. I’d be lying if I said that nothing changed: the db size dropped from 4.6G to about 4G. but the mentioned error still appears in the log, and I can’t see my own local timeline almost every time I choose to show it in applications.
my #vps (where pleroma lives) is not very powerful: a 2-core KVM (~5200 BogoMIPS) with 3GB RAM and (probably) some relatively slow #HDD. I suppose some people would suggest investing more #money into my #IT infrastructure… thank you guys, :) I’ll think of it… but no, thanks, I’m good with what I have already.
I don’t want to reinit the db, so I decided to #delete all events from all instances with more than 500 users (i.e. statistically the most spammy ones). if that doesn’t help, most probably I’ll kill the db and start from scratch. maybe. or I’ll just try to block all fat instances and see what happens.
anyways, what I’ve learned so far is that you can run #Pleroma on #RaspberryPi or on #cheap vps just for #fun, but not for #production use. because sooner or later (sooner, I guess), you’ll bump into #memory or #disk #constraints, which will kill your instance #usability.
so, I’m to run the db #cleaning script. bye!