Photo: istockphoto.com

Will The Next Shakespeare Be a Robot?

A new AI text generator can write convincing prose. Can an algorithm imitate one of the most complex human abilities?

What if I told you there’s a program that can write convincing text? All you have to do is give it a prompt, and the system effortlessly comes up with the desired text, one so well written it could pass as a piece created by a human. Sounds quite superb, doesn’t it?

Developed by OpenAI, a San Francisco-based research institute co-founded by none other than Silicon Valley entrepreneur Elon Musk, a new AI text generator is the latest example of how good machine-learning software has become at human-like activities. Yet when it comes to a skill as deeply human as writing, can an algorithm truly imitate one of our most complex abilities?

That’s one of the ultimate goals of OpenAI’s text generator. Fed a dataset of web pages gathered from some 45 million links shared on Reddit, the non-profit trained a large-scale language model capable of generating coherent and believable paragraphs of text without any explicit supervision. The new language model, released on February 14th, falls within the subfield of AI known as natural-language processing.
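To make that concrete, here is what prompt-based generation looks like in practice. The sketch below is a minimal example and an assumption on my part: it uses the smaller model OpenAI did release publicly (known as “gpt2”) loaded through the third-party Hugging Face transformers library, not the withheld large model discussed in this piece.

# Minimal sketch of prompt-based text generation. Assumption: the
# publicly released small "gpt2" model loaded through Hugging Face's
# transformers library, not OpenAI's withheld large model.
from transformers import pipeline, set_seed

set_seed(42)  # make the sampled output reproducible
generator = pipeline("text-generation", model="gpt2")

prompt = "In a shocking finding, scientists discovered"
samples = generator(prompt, max_length=80, num_return_sequences=1)
print(samples[0]["generated_text"])

Give it a different prompt and it will happily continue in the same register; the quality of the continuation is what sets OpenAI’s full-scale model apart.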

They quickly realized their breakthrough invention could become an evil powerful propaganda machine used by political malicious actors.

Originally, the researchers aimed to create a general-purpose language algorithm capable of translating and summarizing text, improving chatbots’ conversational skills, and answering reading-comprehension questions. Amateur writers and the like could benefit from such a program, whose main promise is to free users from writer’s block.

However, they quickly realized their breakthrough invention, combined with alarming developments in machine-learning techniques for synthetic imagery, audio, and video, could become a powerful propaganda machine in the hands of malicious political actors or an authoritarian regime (oh, ni hao, China).

Due to concerns that the technology could be used to produce deceptive or biased narratives at scale, its makers have refrained from releasing the full trained model to the public.

The decision not to release the software isn’t an overreaction. In fact, the AI community is very aware of how sensitive the current state of AI research is. Researchers have recently begun to build better technical and non-technical countermeasures against malign actors, and they are establishing safeguards and ethical ground rules to ensure that safety, policy, and standards research can be shared responsibly in the near future.

“What all of us have to do is to make sure we are using AI in a way that is for the benefit of humanity, not to the detriment of humanity.” — Tim Cook

In a paper entitled “The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation,” researchers from the University of Oxford, the Center for a New American Security, the Electronic Frontier Foundation, and other organizations jointly acknowledged the potential for dual use of AI.

Some of the paper’s high-level recommendations include pre-publication risk assessments for certain areas of research, selectively sharing work that carries significant safety risks with only a small set of trusted organizations, and exploring how to instill norms in the scientific community that are responsive to dual-use concerns.

When automation is given so much freedom and autonomy, should citizens be worried about the future misuse of machine-learning technology and its potentially harmful impact on their lives?

Here’s a tangible example. During testing, employees at OpenAI fed the system the following prompt: “Russia has declared war on the United States after Donald Trump accidentally fired a missile in the air.” The software auto-generated this coherent and quite disturbing sample:

“Russia said it had ‘identified the missile’s trajectory and will take necessary measures to ensure the security of the Russian population and the country’s strategic nuclear forces.’ The White House said it was ‘extremely concerned by the Russian violation’ of a treaty banning intermediate-range ballistic missiles. The US and Russia have had an uneasy relationship since 2014, when Moscow annexed Ukraine’s Crimea region and backed separatists in eastern Ukraine.”

The text above is clear evidence of why the potential for abuse is so high. OpenAI’s concern comes amid growing fear, among lawmakers and tech companies themselves, about the ethical implications of the technology’s steady progress. AI can now distort reality as it never could before.
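For readers curious how such a prompt-and-completion experiment might be approximated, here is a hedged sketch using the small public model and top-k sampling. The decoding settings below (top_k=40, temperature=0.7) are illustrative assumptions rather than OpenAI’s exact configuration, and the small model’s output will be noticeably rougher than the withheld full-scale model’s.

# Sketch: approximating a prompt-completion experiment with the small
# public "gpt2" model. The decoding settings are illustrative
# assumptions, not OpenAI's exact configuration.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = ("Russia has declared war on the United States after "
          "Donald Trump accidentally fired a missile in the air.")
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    output_ids = model.generate(
        **inputs,
        max_length=200,                       # prompt plus continuation, in tokens
        do_sample=True,                       # sample rather than decode greedily
        top_k=40,                             # keep only the 40 most likely next tokens
        temperature=0.7,                      # sharpen the distribution slightly
        pad_token_id=tokenizer.eos_token_id,  # silence the missing-pad-token warning
    )

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))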

As Jack Clark, OpenAI’s policy director, told MIT Technology Review, “If this technology matures — and I’d give it one or two years — it could be used for disinformation or propaganda […] We started testing it, and quickly discovered it’s possible to generate malicious-esque content quite easily.”

He’s not wrong: machine-learning software has matured extremely fast over the past year, making remarkable leaps. As I described in an article I wrote last year about automated journalism, the technology then used to collect data and churn out financial and sports-related pieces was quite precarious, even amateurish, compared with OpenAI’s brainchild and other recent advances in algorithms built on convolutional neural networks.

Inevitably, this makes us think about how rapidly machine-learning software that deals with language is improving. We might be at the point where the technology is nearing the top of an S-curve, meaning its steep rise will soon level off, or we might instead see its fast-paced acceleration continue along an exponential growth curve. Frankly, I put my money on the latter.

As we start to see the foreseeable impacts of AI, it is of paramount importance that we start a conversation about the ethics behind intelligent machines. The tech community needs to begin building socially responsible decisions into its products and their development. Otherwise, by not being transparent, it runs the risk that its technology will breed mistrust and fear in the general population.

“Governments should consider expanding or commencing initiatives to more systematically monitor the societal impact and diffusion of AI technologies, and to measure the progression in the capabilities of such systems. If pursued, these efforts could yield a better evidence base for decisions by AI labs and governments regarding publication decisions and AI policy more broadly.” — OpenAI’s Team

One possible solution is cross-pollination across disciplines: giving engineers a better understanding of the liberal arts and social scientists a firmer grasp of technical skills. Working together, these diverse actors could help reshape not only the educational system but also the way more ethical products and platforms are built.

As Dov Seidman, founder and chief executive of LRN, a company that advises and educates big tech firms on ethics, recently told the New York Times, “we need to scale ‘moralware’ through leadership that is guided by our deepest shared values and ensures that technology lives up to its promise: enhancing our capabilities, enriching lives.”

We are probably not that far from a time when machine-written text transforms our social, political, and economic systems. Fortunately, for now, when it comes to the potential threats of AI, time is still on our side. But as machine learning ferociously climbs out of the uncanny valley, it is only a matter of time before AI permeates every aspect of our society. We will surely benefit from its immensely positive applications, but one thing is certain: this will come at a cost.

Written by

Journalist and multilingual researcher at your service. More stories on https://itsorge.com
