An excellent counter to the usual FUD

…when it comes to machine learning, AI, and algorithmic decision-making (or, as they say in the 'verse, cybermagistry) in general:

And in particular, that perennially annoying part of it that goes by the name of status quo bias. Always remember, you don’t have to be perfect to be adopted, you just have to be better than what we have now… assuming the decision-makers are rational.

If they’re human, though, you can count on the real ills of the present being heavily discounted compared to the potential ills of the future, in order to best prevent the adoption of even the most obviously beneficial ideas. :facehoof:

(This lesson also goes out to advocates of space colonization, autonomous electric vehicles, and any government form that isn’t exactly what we have now, all of whom are mighty tired of proving that their technologies are within delta of the Platonic ideal of perfection. Just like NASA, SUVs, and Congress.)


The currently popular “pattern-continuation” process honestly seems like a bit of a shortcut to me.

The algorithm has no context for why it would make certain decisions; it just gets tossed a bunch of data and an imperative to find patterns and continue them. Hence the bail-bot that rated black first-time offenders a higher risk than whites with rap sheets.
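
As a minimal sketch of that failure mode (synthetic data and made-up numbers, not any real bail-risk system): if the historical labels themselves encode over-policing of one group, a model fit on them will cheerfully continue that pattern for new people who are otherwise identical.

```python
# Hypothetical sketch: "pattern continuation" on biased historical labels.
# Everything here (group sizes, the 0.25 over-recording bump) is invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

group = rng.integers(0, 2, n)        # 0 = group A, 1 = group B (protected attribute)
priors = rng.poisson(1.0, n)         # prior offence count, same distribution for both groups

# True behaviour depends only on priors, but group B was recorded/arrested
# more often for the same behaviour, so the historical labels are skewed.
true_risk = 1 / (1 + np.exp(-(priors - 1.5)))
recorded = rng.random(n) < np.clip(true_risk + 0.25 * group, 0, 1)

X = np.column_stack([group, priors])
model = LogisticRegression().fit(X, recorded)

# Two first-time offenders, identical except for group membership:
first_timers = np.array([[0, 0], [1, 0]])
print(model.predict_proba(first_timers)[:, 1])   # group B gets the higher "risk" score
```

The model has no notion of *why* group membership correlates with the label; it just extends whatever correlation the recording process baked into the data.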

To be fair, I arbitrarily strongly prefer sexism, racism, and all other human -isms over universal paperclip maximisation. That’s not to say I’m against ML or AI, but the mainstream press and even a large proportion of experts still don’t look at these fields through a safety lens first.

To quote Charles Babbage:

On two occasions I have been asked, — “Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?” In one case a member of the Upper, and in the other a member of the Lower, House put this question. I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question.


On the one hand, I share that particular preference.

On the other hand, the potential problems of that sort of general seed AI are a whole other scope than the current and proposed uses of AI/machine learning.

On the gripping hand, though, while no-one’s done anything historically that could match a successful universal paperclip-maximizer, when we’re worrying about what lesser decision-making AIs might do, the same caveat applies: just as it isn’t correct to compare the performance of autonomous vehicles to the hypothetical perfect human driver when setting policy about them, it’s also not correct to compare the performance of management/government AIs to the hypothetical perfect human decision-maker.

(And, indeed, one should bear in mind the already existing examples of the alternatives: we should not condemn/outlaw/apply undue precautions to AI decision-making because of “Skynet”, “Colossus”, or “John Henry Eden”, unless we are also willing to do the same thing to human decision making because of Stalin, Hitler, and Pol Pot.

This latter is an argument that we are hilariously careless in re empowering humans and we should do something about that just as much as it is an argument that we should relax where AI safety is concerned, of course, but the point is that we really can’t have it both ways.)

One might argue that, for all the damage humans like Stalin, Hitler, and Pol Pot managed to inflict upon mankind, they were fundamentally on the same level of thought and capability as the rest of us (either in the conventional or in the toposophic sense of the word; I don’t know how well the OA model holds water, but I like to entertain the thought that it does), and therefore the extent of their damage was limited by it. With AGI, by contrast, the only limits we can count on for sure are the ones imposed by physics, and those tend to be very, very generous compared to ours.

That being said, I’m very much against the idea of real research being hindered based on fictional evidence.