A real problem with synthetic media

Real life comes at you fast. Fake life comes even faster.

Content creators, marketers, corporate bloggers and others are rushing to take advantage of the new synthetic media trend.

You can see why. Art created with artificial intelligence (AI) offers a more flexible and original alternative to old-fashioned stock photography. And AI content generators, ChatGPT in particular, can literally write decent-quality blog posts, ads and marketing content in seconds.

2022 was the year synthetic media tools went mainstream.

Much of the credit for this sudden turn toward synthetic media by millions goes to a San Francisco-based company called OpenAI. The company, a for-profit firm owned by a non-profit (both called OpenAI), was founded by Sam Altman, Elon Musk, Greg Brockman, Reid Hoffman, Jessica Livingston, Peter Thiel, Amazon Web Services, Infosys and YC Research, and is backed by $1 billion from Microsoft. OpenAI gets the credit because it is responsible for both DALL-E 2 and ChatGPT, the services that put AI art and uncannily natural AI chat on the map.

Hundreds of new products and online services have emerged in recent weeks that make these underlying tools easier to use. But OpenAI's technology is at their core.

The Real Problem With Synthetic Media

Darren Hick, a professor of philosophy at Furman University, recently warned on Facebook that teachers and professors can expect a “flood” of homework essays written by ChatGPT.

We can also expect “cheating” by company content creators.

Public synthetic media tools based on DALL-E 2 and ChatGPT save time and money by generating quality content faster. Companies are already using them for social posts, blog posts, auto-replies and illustrations.

Synthetic media promises a very near future in which ads are custom-generated for each customer, super-realistic AI customer service agents answer phones even at small and medium-sized companies, and all marketing, advertising and commercial imagery is generated by AI rather than by human photographers and graphic artists. The technology promises AI that writes software, handles SEO and posts on social media without human intervention.

Great, isn’t it? The trouble is that few people are thinking about the legal ramifications.

Let’s say you want your company’s leadership pictured on the “About Us” page of your website. Companies are now feeding existing selfies into AI tools, choosing a style, then generating fake photos that all look as though they were taken in the same studio with the same lighting, or painted by the same artist in the same style and palette of colours. But those styles are often “learned” by the AI through processing the intellectual property of specific photographers or artists.

Is that theft of intellectual property?

You also run the risk of publishing content that is similar or identical to ChatGPT content published elsewhere – at the very least being downgraded in Google search results for duplicate content, and quite possibly getting called out (or sued) for plagiarism.

For example, let’s say a company uses ChatGPT to create a blog post, then publishes it with minor editing. Copyright may or may not protect that content, including the AI-generated bits.

But then a competing company tasks ChatGPT to write another blog post, which generates language similar to the expression of the first. After minor editing, that copy goes online.

So who is copying whom? Who has the rights to the shared language in each case? OpenAI? The first poster? Both?

It may be that if the second ChatGPT user never viewed the first user’s content, it is not technically plagiarism. If so, we could be facing a situation where hundreds of sites are getting the same language from ChatGPT but no one is technically copying anyone else.

Adobe is accepting submissions of AI-generated art, which it will sell as stock “photography” and, under that arrangement, claim ownership of the images with the intention of preventing others from copying and using them without payment. Does Adobe have, or should it have, the right to “own” these images – especially if their style is based on the published work of an artist or photographer?

Crossing a Legal Red Line?

The biggest legal risk may be the blind publication of outright errors, which ChatGPT is notoriously capable of producing. (Furman professor Hick caught a student using ChatGPT in part because the essay was flawed and completely wrong.)

It may also generate defamatory, offensive or libelous content or content that violates anyone’s privacy.

When AI’s Words Infringe, Whose Crime Is It?

OpenAI allows ChatGPT’s output to be used, but you must disclose that it is AI-generated content.

But copyright cuts both ways. Most ChatGPT output is generic and boring when there are many sources on a topic. But on subjects where the sources are few, ChatGPT itself may be infringing copyright. I asked ChatGPT to tell me about my wife’s business, and the AI described it perfectly – in my wife’s own words. ChatGPT’s terms and conditions allow the use of its output; in this case, that amounts to claiming to allow the use of my wife’s copyrighted expression, for which she granted permission to neither OpenAI nor its users.

ChatGPT is presented to the world as an experiment, and its users are contributing to its development with their input. But companies are already using this experimental output in the real world.

The problem is that the important laws and legal precedents have not yet been written; putting synthetic media into the world now means that the law of the future will be applied to the content of the present.

The decisions are just beginning. The US Copyright Office recently ruled that a comic book using AI art is not eligible for copyright protection. That is neither a law nor a court decision, but it is a precedent to consider for the future.

OpenAI greenlights the use of DALL-E and ChatGPT output for commercial purposes. In doing so, it shifts the legal burden onto users, who may be complacent about the appropriateness of that use.

My advice is: don’t use synthetic media for your business in any way. Yes, use it to get ideas, learn, explore. But don’t publish AI-generated words or images — at least not until there is a known legal framework for doing so.

AI-generated synthetic media is arguably the most exciting area of technology right now. Someday, it will transform business. But for now, it’s a legal third rail you should avoid.

Copyright © 2022 IDG Communications, Inc.
