Our Jamaica

Looking Glass Chronicles - An Editorial Flashback

Published: Tuesday | January 20, 2026 | 8:08 AM

Although Jamaica has not yet reported cases, the country should move quickly to adjust its laws to deal with harmful AI-related issues that may arise, including the creation and distribution of non-consensual sexualised deepfake images of real people. The Government needs to lead discussions toward amendments to the Cybercrimes Act to remove any doubt that such acts are criminal, while also exploring accountability for platforms that facilitate or host the content, given the serious emotional harm and reputational damage these images can cause.

Protect against AI sex pics

Jamaica Gleaner/14 Jan 2026

ALTHOUGH THERE are yet no complaints of the problem reaching Jamaica, the authorities should move to place beyond doubt that using artificial intelligence (AI) technologies to create and distribute non-consensual sexualised images of real people is not only wrong, but a criminal offence.

The Government should therefore invite discussions with key interest groups on the issue, to inform appropriate amendments to the Cybercrimes Act. Given the rapidity of the development of AI technologies, and the distress and emotional harm their misuse in this fashion can cause, Parliament must not take too long to act.

However, it is not only the individuals who cause the generation of the images who should be answerable. The platforms that facilitate their creation, and host them, should also be accountable.

The debate over the use of AI to generate nude or near-nude pictures of real people – especially women and children – erupted in recent weeks after X, Elon Musk’s social media platform, made Mr Musk’s image-making chatbot, Grok AI, available to users.

Soon X, formerly Twitter, was deluged with thousands of sexualised images of women and children in various stages of undress – and poses. These were often based on photographs the individuals themselves had posted on the platform.

The deepfake pictures are often linked to the people’s original posts, helping to fuel hurt and embarrassment. And, not infrequently, even greater layers of deepfakes.

“Lives can and have been devastated by this content, which is designed to harass, torment, and violate people’s dignity,” Liz Kendall, Britain’s secretary for technology, told the country’s parliament on Monday, in announcing that she was making the creation of these deepfake images a “priority offence” under the UK’s Data Act. The law was passed last year but is only now taking effect.

MONETISING ABUSE

The “priority offence” designation means that creating sexualised images without the affected people’s consent is among the list of serious crimes that technology companies must proactively prevent from reaching consumers in the UK. A similar designation exists under the UK’s Online Safety Act in relation to the non-consensual sharing, or threatening to share, of intimate images of individuals.

In the face of a growing backlash, X last week limited access to Grok AI, but told users that it would be available to X’s fee-paying subscribers.

Ms Kendall said this was insulting to victims. “... It is monetising abuse,” she said.

Further, X has essentially placed the burden of avoiding the generation of “illegal content” on users, rather than assuming responsibility for prevention, a stance which, on its face, is in keeping with the protection afforded under Section 230 of America’s Communications Decency Act.

It says: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

Nonetheless, X’s behaviour could place it on a collision course with global regulators, including Britain’s Ofcom, which has opened an investigation into the deepfake images generated on the platform.

Other platforms have AI technologies with similar capabilities to Grok AI. However, they generally have systems to prevent users from creating and posting the kinds of content that have generated the controversy that now plagues Mr Musk’s X.

INHERENT DANGERS

Manipulation of images is, of course, not new. Photoshopping, for example, has long been a widely available tool for altering or enhancing photographs. Those technologies, though, required specialised skills; and the adjustment and enhancements they achieved were generally discernible to the trained eye.

There are two inherent dangers with the new AI technologies: a verbal or simple written command, without specialised training, is all that is required of the user to generate an image; and, critically, the generated image is seamlessly realistic.

In other words, the power to deceive with a computer-created image, a deepfake, has been democratised. And, as recent events have shown, often for ill.

This newspaper appreciates the potential of AI technologies for Jamaica’s growth and development. But this must be in the context of respect for decency, societal norms, and the rights of individuals. Which is the precept of Section 9 of the Cybercrimes Act.

This makes it an offence to use a computer to send data to another person:

“(a) that is obscene, constitutes a threat or is menacing in nature; and

“(b) with the intention to harass any person or cause harm, or the apprehension of harm, to any person or property”.

These provisions have been applied to revenge porn and similar cases in the domestic judicial system. Given the realism of AI deepfakes, the courts would probably also interpret them as applicable to the non-consensual generation of sexualised images of known individuals. These provisions would be underpinned by the Sexual Offences Act and the Offences Against the Person Act.

However, The Gleaner believes that the issue should be placed beyond all doubt with amendments to the Cybercrimes Act that make it clear that non-consensual, AI-generated images which meet the tests for obscenity, threat, menace, illegal harassment or harm fall beyond the legal pale.

 

For feedback: contact the Editorial Department at onlinefeedback@gleanerjm.com.