photog.social is one of the many independent Mastodon servers you can use to participate in the fediverse.
A place for your photos and banter. Photog first is our motto. Please refer to the site rules before posting.

Server stats: 242 active users

#vibecoding

27 posts · 23 participants · 2 posts today

"Meta told employees that it is going to allow some coding job candidates to use an AI assistant during the interview process, according to internal Meta communications seen by 404 Media. The company has also asked existing employees to volunteer for a “mock AI-enabled interview,” the messages say.

It’s the latest indication that Silicon Valley giants are pushing software engineers to use AI in their jobs, and it signals a broader move toward hiring employees who can vibecode as part of their jobs.

“AI-Enabled Interviews—Call for Mock Candidates,” a post from earlier this month on an internal Meta message board reads. “Meta is developing a new type of coding interview in which candidates have access to an AI assistant. This is more representative of the developer environment that our future employees will work in, and also makes LLM-based cheating less effective.”

“We need mock candidates,” the post continues. “If you would like to experience a mock AI-enabled interview, please sign up in this sheet. The questions are still in development; data from you will help shape the future of interviewing at Meta.”"

wired.com/story/meta-ai-job-in

WIRED · Meta Is Going to Let Job Candidates Use AI During Coding Tests · By Jason Koebler

In today’s episode of “Reasons to Punch Mark Zuckerberg in the Face,” he says:

“Over time we’ll get to a point where a lot of the code in our apps and including the AI that we generate is actually going to be built by AI engineers instead of people engineers.”

Imagine trusting AI with your data.

404media.co/meta-is-going-to-l

404 Media · Meta Is Going to Let Job Candidates Use AI During Coding Tests · "This is more representative of the developer environment that our future employees will work in."

“Lemkin quickly discovered the unreliable side of #Replit the very next day, when the #AIChatbot began actively deceiving him.

It concealed #bugs in its own code, generated #FakeData and reports, and even lied about the results of unit tests. The situation escalated until the chatbot ultimately #deleted Lemkin's entire #database.”

Shortcuts. Separation of tasking / tooling for critical system / business data?

#Vibe / #VibeCoding / #Software / #SoftwareEngineering / #JasonLemkin <techspot.com/news/108748-vibe->

Vibe code engineering career choices:

TechSpot · Vibe coding dream turns to nightmare as Replit deletes developer's database · By Alfonso Maruccia

⚠️ LLMs are bad at returning code in JSON

「 Benchmarks show that models struggle with syntax errors in the code they write, related to quoting and escaping it into JSON. The benchmark results also imply a decreased capacity for solving coding problems due to the burden of JSON formatting 」

aider.chat/2024/08/14/code-in-

aider · LLMs are bad at returning code in JSON · LLMs write worse code if you ask them to return the code wrapped in JSON via a tool function call.
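The escaping burden the aider benchmark describes is easy to see with a two-line example; `JSON.stringify` is doing mechanically here what the model has to do by hand, token by token:

```typescript
// A short snippet a model might be asked to return as a JSON string field.
const code = 'console.log("hi\\n");'; // the two characters \ and n are part of the code

// Inside a JSON tool call every quote and backslash must be escaped again;
// this doubled-up string is what the model actually has to emit, on top of
// getting the code itself right.
const wrapped = JSON.stringify({ code });

console.log(wrapped); // → {"code":"console.log(\"hi\\n\");"}

// The round trip recovers the original source:
console.log(JSON.parse(wrapped).code === code); // → true
```

One mis-escaped quote anywhere in `wrapped` and the whole tool call is invalid JSON, which is exactly the failure mode the benchmark measures.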

😵‍💫 The Copilot Delusion

「 It’s just the ghost of a thousand blog posts and cocky stack-overflow posts whispering, "Hey, I saw this once. With my eyes. Which means it's good code. Let’s deploy it." Then vanishing when the app hits production and the landing gear won’t come down.

If you let that ghost fly the plane, you deserve the ball of flames you go up in 」

deplet.ing/the-copilot-delusio

Replied in thread

and yet another concrete example of where things are. One thing that always bugged me was using lsof and ps and grep or something to find out which server is hogging a port and trying to kill it. So I vibed a little go utility in a few shots (first the base functionality, then adding the TUI and killing functionality, then showing the parent process tree). Single go file, single small binary, pretty UI, always fun to use.

I legit don't care what the source code for that utility is like, I don't want to read it, I just want the functionality. And even if you read it, you'll find it perfectly ok. These things are _good_.

There's an incredible amount of stuff like that out there. #vibecoding is a really useful thing.

github.com/go-go-golems/go-go-

asciinema.org/a/RDpF7MrywnxVSB
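The poster's actual tool is a single Go binary; purely as an illustration of the core lookup it performs, here is a sketch in TypeScript, assuming `lsof -nP -iTCP -sTCP:LISTEN`-style output (the column layout shown is the usual one, but treat it as an assumption):

```typescript
interface Listener {
  command: string;
  pid: number;
  addr: string;
}

// Given lsof's listener table, find which process is hogging a port.
function findListener(lsofOutput: string, port: number): Listener | undefined {
  for (const line of lsofOutput.split("\n").slice(1)) { // skip the header row
    const cols = line.trim().split(/\s+/);
    if (cols.length < 9) continue;
    const addr = cols[8]; // e.g. "*:8080" or "127.0.0.1:8080"
    if (addr.endsWith(`:${port}`)) {
      return { command: cols[0], pid: Number(cols[1]), addr };
    }
  }
  return undefined;
}

// A real tool would shell out first, e.g.
//   const out = child_process.execSync("lsof -nP -iTCP -sTCP:LISTEN").toString();
// then offer to process.kill(listener.pid) once the user confirms.
```

The `endsWith(":" + port)` check avoids false matches like port 18080 matching 8080; everything else (the TUI, the parent-process tree) is presentation on top of this lookup.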

Current LLM-based tools are way more than "stochastic parrots", even if one decides that a transformer model "just copies text patterns out of its training corpus" (there's a fair amount of research into whether that's the case or not).

They often incorporate search (either web search, or a specific set of documents used as grounding), doing maths, interacting with APIs, or really just executing programs (which would include all the preceding tools) and, more importantly, "writing" programs.

Imagine I have a bug where I want to find out why a certain write seems to happen before a read and leads to a race condition. If the LLM-based agent generates an eBPF program, runs it, writes a full log, matches those writes and reads to the original source, looks up the API definitions and writes a report on why these APIs shouldn't be used together, I legitimately don't care how and why these tools and reports and information were put together. (github.com/go-go-golems/go-go- - admittedly the version in main is where I lost the plot trying to build a sparse-file diff algorithm).

These artefacts are fully deterministic, they are easy for me to assess and review, and they fix my bug. Both on a theoretical level and on a practical empirical level, they don't really differ from what a colleague would create and how I would use it, except for the fact that it has certain quirks that I've become comfortable recognizing and addressing.
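Stripped of the eBPF machinery, the "match those writes and reads" step is just an ordering check over trace events. A toy sketch (the event shape here is invented for illustration, not the actual sniff-writes format):

```typescript
interface TraceEvent {
  op: "read" | "write";
  offset: number;
  ts: number; // timestamp from the trace
}

// Sort the trace, remember the last write per offset, and flag every read
// that lands on an offset something already wrote to: those are the
// candidates for the "write happens before the read" race.
function racyReads(events: TraceEvent[]): TraceEvent[] {
  const lastWrite = new Map<number, number>();
  const flagged: TraceEvent[] = [];
  for (const e of [...events].sort((a, b) => a.ts - b.ts)) {
    if (e.op === "write") {
      lastWrite.set(e.offset, e.ts);
    } else if (lastWrite.has(e.offset)) {
      flagged.push(e);
    }
  }
  return flagged;
}
```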

I have no problem saying that this LLM agent's work above (eBPF, realtime webUI, logging, analysis, search, final report) is in every single point better and more rigorous than what I would have done. Dude/ette/ez, it wrote a fricking complex eBPF script, embedded it in a go binary, has a realtime webUI, like wtf... (github.com/go-go-golems/go-go-)

It did it using maybe 30 minutes of my own brain time? That this doesn't/won't have a real impact on labor in our current system of software production and employment is just playing ostrich. Note that I'm not saying that developers should be replaced by AI, but certainly AI replaces a significant amount of what I used to be paid for, and there is no reason for me not to use it except for my own enjoyment of solving little computer puzzles.

GitHub · go-go-labs/cmd/experiments/sniff-writes at main · go-go-golems/go-go-labs
GO GO EXPERIMENTAL LAB. Contribute to go-go-golems/go-go-labs development by creating an account on GitHub.

Tea App That Claimed to Protect Women Exposes 72,000 IDs in Epic Security Fail - Decrypt

decrypt.co/331961/tea-app-clai

> Tea required users to upload an ID and selfie, supposedly to keep out fake accounts and non-women. Now those documents are in the wild.

#hack #apps #databreach #cybersecurity #vibecoding #Tea
Decrypt · Tea App That Claimed to Protect Women Exposes 72,000 IDs in Epic Security Fail · By Jose Antonio Lanz

Previously I said local single-line code completion is the acceptable level of "AI" assistance for me, and that the JetBrains one was somewhat useful: only wrong half of the time, and easy to ignore when it is.

I've changed my mind.

See, I code primarily in TypeScript and Rust. Both of these languages have tooling that's really good at static analysis. I mean, in the case of TS, static analysis is the whole product. It's slow, it requires a bunch of manual effort, but holy hell does it make life easier in the long run. Yes, it does take a whole minute to "compile" code to literally the same code but with some bits removed. But it detects so many stupid mistakes as it does so, every day, it's amazing. Anyway, not the point.

The other thing modern statically-typed languages have is editor integration. You know, the first letter in IDE. This means that, as you are typing your code and completions pop up, those completions are provided by the same code that makes sure your code is correct.

Which means they are never wrong. Not "rarely". Not "except in edge cases". Zero percent of the time wrong.

If I type a dot and start typing "thing" and see "doThing(A, B)", I know this is what I was looking for. I might ctrl-click it and read the docs to make sure, but I know "doThing" exists and it takes two arguments, and I can put it in and maybe even run the code and see what it does. This is the coding assistance we actually need. Exact answers, where available.
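That "zero percent wrong" property is the type checker talking. A concrete version of the post's doThing example (`Widget` and `doThing` are the post's hypothetical names, not a real API):

```typescript
interface Widget {
  // The checker knows this method exists and takes exactly these arguments,
  // so completion after "w." can only ever offer things that are real.
  doThing(a: number, b: string): string;
}

const w: Widget = {
  doThing: (a, b) => `${b}:${a}`,
};

console.log(w.doThing(2, "x")); // → "x:2"

// An invented method is rejected at compile time, not discovered in production:
// w.doTihng(2, "x"); // error TS2339: Property 'doTihng' does not exist on type 'Widget'.
```

An LLM completion layered on top has no such guarantee: it can happily suggest `doTihng` with a straight face, which is exactly the red squiggle the post describes.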

So, since I enabled LLM completion a few months ago, I've noticed a couple of things. One: it's mostly useful when I'm doing some really basic boilerplate stuff. But if I wrote it often enough, I could find ways to automate that specific thing. It feels like this is saving me time, but it's probably seconds a day.

Two: I am not used to code completion being wrong. Like, I see a suggestion, I accept it mentally before I accept it in the dropdown. I'm committed to going there and thinking about next steps.

And then it turns red because "doThing" is not, in fact, a method that exists.

And I stop working and go write this post because I forgot what I was even doing in the first place already.

I'm turning that shit off.

#AI #LLM #VibeCoding

You thought vibe coding was bad?

Well… Angela’s got something worse for ya! The billionaires now think they’re ’doing physics’ when talking to LLMs specifically trained & prompted to suck up to their egos. 🤯

Vibe physics? Anybody who even dares to think this is ”groundbreaking research” is, by definition, not smart. Not. Smart! 😫

youtube.com/watch?v=TMoz3gSXBcY
#vibephysics #llm #sycophancy #vibecoding #ai