Artificial Intelligence (AI)
Should I use AI for this?
Advice From XRUK Digital Team, January 2026
There are serious problems with Artificial Intelligence (AI). Read on if you are considering using AI for XRUK.
AI has environmental and ethical costs. Before considering how to use it safely, consider whether to use it at all. There are technical considerations too - assume anything you give it could be stored, exposed, or used to identify you or others. It also lies, sometimes by design.
This advice is about public AI services, whether free or paid-for, which you can choose to interact with.
This advice covers when, how and even whether you should use AI in XRUK.
What is AI?
AI is the buzzword for computers simulating human comprehension, problem solving, decision making and creativity.
AI learns by analysing whatever information it finds on the internet and the way people interact with it.
There are software applications enabling anyone to chat with AI using a “chatbot”, but many other applications use AI without us being aware of it.
Concerns about using AI
Ethical
Please follow the Pull the Plug campaign for much more on this.
- Energy and water use at the data centres, especially during AI training.
- Cultural bias e.g. white supremacy and capitalism.
- Automation of work will be used to dismiss many workers.
- AI development relies on low-paid workers globally to moderate content and refine outputs.
- Almost all AI is purely profit driven for the few.
- Danger of AI that is untested, or proven unsafe, becoming integrated into our lives.
Security and risks to our work
- Platforms can track use and analyse content, and sometimes identify who is making a search.
- AI can learn from the information provided to it.
- XR work could leak, or be used to support prosecutions for intent or conspiracy.
Legal
- Sharing internal personal data with AI may breach Data Protection law
- AI does not respect copyright.
- Processing any personal data needs to be legal, justified and transparent.
Accuracy
- AI is not reliable.
- AI can be used for trickery, fake facts (i.e. lies) and to impersonate a human.
Things to take into consideration before using AI
- Can you reasonably do the work without using AI? If so, what justification for using it outweighs the concerns above?
- Be careful what AI could learn. What could AI infer from your questions or information? And what if they knew all the questions everyone in XR was asking?
- How can you trust the outcomes to be fair, full and truthful? Check AI outputs for bias and completeness. If making decisions based on AI outputs – how can you check them in other ways?
- Do you have permission to use personal or copyrighted material the AI has harvested or imitated? How do you record where and how you got the information? (XRUK has been threatened with lawsuits before about using copyright material. There are companies which specialise in making these threats.)
- Do not include any personal data in a query. Any information shared publicly or with AI must be completely anonymised.
- If we must use personal data externally – have we told people that’s how we are using their data and why, and did they agree to it?
- We still need justification for anyone’s personal data we hold in XR – even if the Internet or AI says it’s accurate and public. Talk with GDPR & Security about the data you want to hold, and do the data planning.
- If you think AI will help XRUK achieve its purpose – and you’ve considered the points above – then carry on. Check with GDPR & Security if you have any remaining concerns.
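If you do paste text into a public AI service, the anonymisation advice above can be partly automated. The sketch below (a minimal, hypothetical example – the function name and patterns are our own, not any official tool) strips obvious identifiers like email addresses and phone numbers before text leaves your machine. It cannot catch names or addresses, so you must still re-read the result yourself.

```python
import re

def redact(text):
    """Replace obvious personal identifiers before text is sent anywhere.

    A minimal sketch: it only catches clear patterns (emails, phone
    numbers). Names, addresses and context clues still need manual
    removal -- always re-read the output before using it.
    """
    # Email addresses -> [EMAIL]
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.-]+", "[EMAIL]", text)
    # Phone-like digit runs (10+ digits, allowing spaces/dashes) -> [PHONE]
    text = re.sub(r"\+?\d[\d\s-]{8,}\d", "[PHONE]", text)
    return text

print(redact("Contact Sam at sam@example.org or 07700 900123"))
```

Even with a helper like this, treat automated redaction as a first pass only – the safest personal data is the data you never include in the first place.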
If you want advice about using AI in XRUK – please come to the Tech reception channel on Mattermost or reach out to tech@rebellion.earth
If you think you need to use AI for internal XR information, you could ask whether Digital can run open source AI software. One example of this is the transcription & subtitling in our new Jitsi platform, running on our own servers – so we don’t leak meeting contents to Zoom (or whatever other video meeting platform).