Echoes Of The I

Somehow this has been sitting in my drafts folder for two months. Two months! An eternity, in magpies-and-merry-go-rounds hype-cycle news time. Entire civilizations have risen and fallen, tectonic plates have pressed deep granite mountains into the skies and poured Pangean oceans into the newly fractured chasms far opposite. Empires have risen from history’s ashes to ascend to Babel-like heights only to crumble to dust in turn, hollowed and scattered. Somehow the New York Times’ opinion section has been consistently terrible throughout.

In any case: I mentioned the other day that it is genuinely, sincerely embarrassing to listen to people I’ve admired, people I considered thoughtful and rigorous, talking about large-dataset statistical-repetition tools like they’re some inscrutable alien intelligence to be feared or worshipped. It’s like watching somebody you respect and love cowering in the corner, screaming in terror at a Roomba with googly-eyes on it, when you watched them stick those googly-eyes on it five minutes ago. My friend, my brother, how are you failing a Rorschach test that you invented for yourself?

This frustration escalated a few weeks ago when I was involved in a discussion about the idea of “regulating AI”. You know your life has taken a turn for the worse when you’re in a room with people who say “regulatory markets” with a straight face, but anyone who’s internalized one kind of magical economic fiction will have no natural defences against an idea like “AI risk”. There’s something about Chicago economics that leaves its victims intellectually immunocompromised, and it’s just not safe to leave those vulnerable people in an unventilated room where technological novelties have been offgassing; like rabies, by the time the symptoms are visible and they’re spouting nonsense like “AI will not only disrupt our regulated market economies; it will disrupt regulation itself”, it’s too late to treat.

With a keen eye, fortunately, you can spot the early warning signs; if you or someone you know is showing signs of distress – slurring their definitions, saying “AI innovation” over and over again, stumbling drunkenly on the path from epistemology to policy – it may not be too late to intervene.

(The core of this conversation was whether or not AI companies should be “self-regulating”, and I guess at this point you can tell what my position on that is going to be, haha.)

Fortunately, the remedy is simple and easily applied: let’s get concrete about what this software actually is and what it actually does.

You find out quickly that the most politically useful tool in this entire AI discussion is the relentless mystification around that question, and the reason AI companies are desperate to keep policy-makers from piercing that veil is how little there is behind it. These enormous, expensive AI/ML systems are just statistical pattern recognition and nondeterministic repetition tools.
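
To make that concrete: here’s a toy sketch of the whole category in miniature, a bigram Markov chain. It’s mine, not anyone’s production system, and a real LLM swaps the word-pair counts for billions of learned parameters and the corpus for a scrape of most of the written internet, but the loop is the same shape: tally the patterns in the training data, then sample from those tallies.

```python
import random
from collections import Counter, defaultdict

def train_bigram_model(corpus: str) -> dict:
    """The 'statistical pattern recognition' part: count which word follows which."""
    model = defaultdict(Counter)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def generate(model: dict, seed: str, length: int = 10) -> str:
    """The 'nondeterministic repetition' part: sample the next word from those counts."""
    out = [seed]
    for _ in range(length):
        followers = model.get(out[-1])
        if not followers:
            break
        choices, weights = zip(*followers.items())
        out.append(random.choices(choices, weights=weights)[0])
    return " ".join(out)

corpus = "the cat sat on the mat and the dog sat on the rug"
model = train_bigram_model(corpus)
print(generate(model, "the"))  # e.g. "the cat sat on the rug"; different every run
```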

That phrase is a mouthful, sure, but if you started calling it “statistical repetition risk”, or saying “stochastic repetition will not only disrupt our regulated market economies; it will disrupt regulation itself”, well. It’s enough to make you suspect that this word salad isn’t quite as scary as advertised, and when you realize that “model size” is a metric chosen from incumbents’ marketing materials, maybe the urgency we’re being sold isn’t a real thing and the man behind the curtain is a lot more Willy Loman than wizard.

There’s real utility in here, I think. “Machine-assisted pattern recognition” already has deep roots in data-compression theory, and automating tedious things seems generally good. But it’s completely meaningless from a public policy perspective; the size of the database a computer program uses bears no connection to the decisions that software makes or to accountability for the harm those decisions cause, even if in the LLM case it’s an excellent proxy metric for the scale of information theft involved in creating it.

But once that urgency passes, the question of whether AI companies should be “self-regulating” answers itself. What companies developing large-model ML tools have demonstrated any sort of commitment to the rule of law or the common good that might come at the expense of near-term profit? Have any of these fearmongering frauds ever had a humanitarian urge that survived a board member huffing their own farts and deciding it smells like there’s money in the couch? Of course not. The only AI companies that didn’t fire their “AI Ethics” teams the moment somebody threw a stack of twenties on the table are the ones that sneered at the idea of hiring them in the first place.

That’s only part of the story, of course; and while I’m fond of saying “there’s no I in AI”, that’s not 100% true. There are two different kinds of Intelligence in the Artificial Intelligences of the world. The first is the programmers who’ve decided what data goes into their statistical models, and the second is the armies of people in Kenya being paid pennies to keep the internet’s nightmares from appearing next to Important Brands.

That’s not an exaggeration. Most of what you’re being sold as AI is underpaid labour in developing countries. That’s the urgent question, not the one the AI people are trying to frighten you into obeisance with: your complicity, our complicity, in ignoring the army of people enduring trauma on our behalf for pennies an hour, all for the sake of the mediocre text and excess fingers of our spicy autocomplete.

Maybe that’s what matters. Maybe we shouldn’t let it slide: what we’re really talking about here is people not being held to account. The total absence of liability in software, for companies and individual developers alike, is the heart of this conversation.

I believe this flurry of AI-harms legislation is a deliberately engineered distraction from that question, and that vesting regulatory authority in organizations that will throw any people they employ or principles they claim to hold in the trash the moment somebody throws fifty bucks on the ground in front of them will end exactly the way you’d think.

The idea of these things being sentient is a science fiction fantasy. But listen to the people who’ve brought this into the world, sincerely believing it to be some kind of new sentience, and saying “if it crosses a line we made up, we should kill it and everything like it”. Even if they’re right, how could you trust somebody like that with a houseplant, much less an important idea, much less public policy?

They’re not right, though, and they’re not really scared; they just need you to be. So they can avoid being held to account, so we don’t talk about the real locus of agency in software, and where responsibility for real-world harms should actually reside. To distract us from a more fundamental question, of whether or not software developers should be held accountable, in detail, for the real social and economic failures caused by the software we’ve created.

