LinkedIn says if you share fake or false AI-generated content, that’s on you
More and more companies are warning users not to rely on AI
LinkedIn is passing responsibility onto users for sharing misleading or inaccurate information generated by its own AI tools, rather than holding the tools themselves accountable.
A November 2024 update to its Service Agreement will hold users accountable for sharing any AI-generated misinformation that violates the agreement.
Since no one can guarantee that the content generative AI produces is truthful or correct, companies are covering themselves by putting the onus on users to moderate the content they share.
Inaccurate, misleading, or not fit for purpose
The update follows in the footsteps of LinkedIn’s parent company Microsoft, which earlier in 2024 updated its own terms of service to remind users not to take AI services too seriously and to address the technology’s limitations, advising that its AI tools are ‘not designed, intended, or to be used as substitutes for professional advice’.
LinkedIn will continue to provide features which can generate automated content, but with the caveat that it may not be trustworthy.
“Generative AI Features: By using the Services, you may interact with features we offer that automate content generation for you. The content that is generated might be inaccurate, incomplete, delayed, misleading or not suitable for your purposes," the updated passage will read.
The new policy reminds users to double-check any information and make edits where necessary to adhere to community guidelines.
“Please review and edit such content before sharing with others. Like all content you share on our Services, you are responsible for ensuring it complies with our Professional Community Policies, including not sharing misleading information.”
The social network is probably expecting its generative AI models to improve in future, especially since it now uses user data to train its models by default, requiring users to opt out if they don’t want their data used.
There was significant backlash against this move, as GDPR concerns clash with generative AI models across the board, but the recent policy update suggests the models still need a fair bit of training.
Via The Register
Ellen has been writing for almost four years, with a focus on post-COVID policy whilst studying for BA Politics and International Relations at the University of Cardiff, followed by an MA in Political Communication. Before joining TechRadar Pro as a Junior Writer, she worked for Future Publishing’s MVC content team, working with merchants and retailers to upload content.