I've made my living writing and telling stories, and I now lead a team of talented content creators at CoinDesk, so my first response to the latest announcement from OpenAI was one of horror.

If you missed it, OpenAI last month released GPT-3, the latest version of its language-generating AI tool. Countless test runs have shown its impressive ability to compose entire essays, produce app wireframes and even write software code in response to just a few words of instruction. Armed with GPT-3 and, later, with subsequent versions (4, 5, 6 and so on), artificial intelligence is on its way to becoming an adept, even polished, content creator.

The upshot: we writers are not immune to the robots.

Now that I’ve accepted that fate, it’s time to turn to the even bigger questions this raises for society as a whole:

Foremost, when (not if) semi-autonomous AI applications are churning out the majority of the content we consume, who owns it? Who is responsible if it results in fake documents or news? (What does “fake” even mean in this context?) Who can or should be held liable for defamation or other consequences of its speech or creative expression? And how do we divide up the rights and sub-rights to AI-produced derivative works whose content or ideas are built on those of a previous author or inventor?

