One of the dumber ideas about super intelligence is Roko's Basilisk - the evil super-AI of the future, so super evil and powerful that it is able - from the future - to compel us to create it.
It's a silly, contrived idea - not because it's childish science fiction, but because it vastly overestimates the intelligence it would take to fuck everything up.
We're doing fine on our own in that department, without super intelligence - and there are tons of lesser technological creations that can do the damage just fine.
One of the summer's big releases in the field is the API to OpenAI's GPT3 model - the beta tokens are getting into interesting hands and things are happening.
There are claims of code-generating demos and all kinds of other things - but my favorite so far - not least because I trust the source - is Mario Klingemann's experiments with making GPT3 do literary pastiche.
In a remarkable Twitter thread, Klingemann shares a couple of essays in the style of Jerome K. Jerome - it's quite something. They are generally 3-6 pages long, the language is generally good, and - what's even more impressive - it's not just the words or lines that are quite good, it's the pages. GPT3 successfully lands half-page and full-page arcs. Here's a passage where GPT3 asks a question, unfolds a big explanation, and finishes in style by closing with an answer.
"Do they not find out," I said, "that a twitter on mathematics would be as useful as a twitter on Shakespeare?" "No," he said, "they are two distinct subjects. I will explain the difference by an analogy."
"Do," I said.
"Well," he said, "suppose I had a ring with a note inside it that you were to open in the event of my death. In that case I could make the note of any length I liked."
"That is just the point," I said.
"I could have a long note that you would only open after many years, and you would find it full of the most interesting information about that foreign country you are always telling me about. Or I could have a very short note, quite a twitter, about going out to walk in the country with you to-morrow. The short note would convey just as much information as the long note, if you got it at once, but if you waited for the long note you would get information about foreign countries for
"Oh, yes," I said.
"Now, in the same way," he said, "mathematics are the long notes and their application to daily life is the short notes."
"I understand that perfectly," I said.
There are certainly elements of nonsense here, but I find GPT3's ability to shape a sequence like this at all quite sensational. It has a certain flair to it, and the rhythm and timing are great - nonsensical subject matter be damned.
It's clearly more coherent than, say, Donald Trump - and this is where my problem with Roko's Basilisk begins.
My nightmare isn't a super evil super intelligence - it's an evil-minded doofus faking his way with a GPT3-level nonsense generator. Imagine asking GPT3 policy questions and just going with the answers - plus a tiny bit of whatever stupid grift you had in mind. We don't actually have to imagine it - you can see it in action, with Donald Trump as the doofus, using the much simpler GPT2 model here. A horrible person with access to a round-the-clock automated debate-club winner - there's a nightmare I can relate to.