╔══════════════════════════════════════════════════════════════╗
║                        THE EMPROMPTED                        ║
║    AI-HUMAN COLLABORATION FOR HUMAN RIGHTS PROFESSIONALS     ║
║                                                              ║
║                 ITU, GENEVA • DECEMBER 2025                  ║
╚══════════════════════════════════════════════════════════════╝
In the 1980s, graphical interfaces meant almost anyone could use a computer.
Today, large language models mean almost anyone can ask a computer to help build tools,
analyse documents or produce visualisations in plain English.
"Vibe coding" was named Collins Dictionary's Word of the Year 2025. The term, coined by Andrej Karpathy, describes a style of programming where you explain what you want in ordinary language and an AI system writes the code.
Why It Matters for Human Rights Professionals
• You can sketch and test tools yourself, without waiting for a full IT project.
• You can turn your expertise and questions into small, concrete applications for (almost) free.
• If you want to scale up, you can have more informed conversations with technical teams: you arrive with a prototype, not just an idea.
• You ensure human rights by design from the start, not as an afterthought.
• Join the open source AI community by exploring and contributing to the vast collection of models, datasets, and code on platforms like Hugging Face and GitHub.
🚀 YOUR STARTER PROMPT
This prompt shows what it now means to "program" with natural language: you describe the tool; an AI system generates the code.
"Create a complete web application for searching
UN treaty body documents with the following functions:
1. Paragraph-Level Extraction
- Extract the first 20 paragraphs from the UN report A/80/169 on AI in judicial systems
- Each paragraph maintains its original numbering (1-20)
- Full text preservation with proper formatting
2. Thematic Labeling System
- Categorize each paragraph with relevant UN human rights framework themes; identify the six most relevant.
Explain in the output why and how you selected and defined the categories.
3. Advanced Search Functionality
- Keyword Search: Find individual terms (e.g., "information," "speech")
- Thematic Filtering: Filter by specific human rights themes
- Highlight Results: Search terms highlighted in yellow
- Real-time Search: Instant results as you type
4. User Interface
- Modern, responsive design similar to OHCHR style
- Live dashboard with stats about filtered documents - theme distribution, term frequency, and bigrams
- Glass morphism effects and professional appearance
- Color-coded theme tags for easy identification
- Paragraph numbering system for reference
- Intuitive copy text and citation function
- Mobile-friendly responsive layout"
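What comes back is typically a single HTML file with inline JavaScript. As a flavour of the code a model tends to write for the keyword search and highlighting, here is a minimal hand-written sketch; the paragraph data and function names are illustrative assumptions, not output from any actual model:

// Illustrative sketch: client-side keyword search with highlighting.
// The `paragraphs` array stands in for the 20 extracted paragraphs.
const paragraphs = [
  { num: 1, themes: ["Fair trial"], text: "States should ensure..." },
  { num: 2, themes: ["Transparency"], text: "AI systems used in courts..." },
  // ...paragraphs 3-20
];

function escapeRegExp(s) {
  return s.replace(/[.*+?^${}()|[\]\\]/g, "\\$&");
}

// Return matching paragraphs with each hit wrapped in <mark> (yellow by default).
function search(term) {
  const lower = term.toLowerCase();
  const re = new RegExp(escapeRegExp(term), "gi");
  return paragraphs
    .filter(p => p.text.toLowerCase().includes(lower))
    .map(p => ({ ...p, html: p.text.replace(re, m => `<mark>${m}</mark>`) }));
}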
Where to Use This
LM Arena – free access to several cutting-edge models
Vibe coding shines for simple, useful tools: a document search engine,
a lightweight analytics dashboard, or a short report generator.
Human rights experts often lack exactly these basics: decision/document search, filtering, and XLSX export.
Build step by step and shift towards your own code changes as you learn.
Over-ambitious first projects often stall before delivering anything. Ship the simple thing first, then grow.
🧪 FIVE APPS FROM ONE REPORT
What You Can "Vibecode" from the UN report
Using the report of the UN Special Rapporteur on the independence of judges and lawyers A/80/169 – AI in judicial systems: promises and pitfalls as a
reference point, you can describe practical monitoring tools in natural language and ask an AI model to generate prototypes.
Disclaimer – one-shot prototypes
Each app below was generated with a single, long one‑shot prompt:
one detailed instruction asking an AI coding model to produce a complete prototype in one go.
Each prompt refers to A/80/169.
These apps are demonstration tools only.
How to Use This Section
Choose an app that matches your work (e.g. safeguards, language, legal aid).
Click its tile to open a short description and the one‑shot prompt.
Try the live demo, or build it yourself by copying the prompt into LM Arena. Treat the result as a prototype to critique and improve.
🛡️
Safeguards Checklist
Governance and oversight
🗣️
Language & Accessibility
Plain language & digital divide
🤝
Legal Aid & Workload
What to automate (or not)
🗺️
AI Use Landscape
Where courts already use AI
⚖️
Risk & Rights Explorer
Link AI uses to rights
Select a tile above to open a small window with the description, workshop prompt and (where available) a live demo.
🗺️ AI Use Landscape in Judicial Systems
Map where and how AI is used in courts and justice systems, with each example linked to a country
(e.g. Brazil's VICTOR, Mexico's Sor Juana, Nigerian access‑to‑justice tools).
Use this to identify patterns and gaps in how AI is entering judicial systems.
⚖️ Risk & Rights Explorer
Explore how different AI tools in specific countries may affect fair trial, non‑discrimination,
privacy and access to justice, based on situations inspired by A/80/169 and related reporting.
Treat this as a way to structure human‑rights questions, not as an automated assessment.
🛡️ Safeguards & Governance Checklist Builder
Turn recommendations from A/80/169 and regional instruments into checklists for different AI uses
(administrative tools, decision support, public‑facing tools, legal aid).
Use the generated checklist to support dialogue with ministries, courts and oversight bodies.
🗣️ Language & Accessibility Lens
Focus on language and access: AI‑supported translation, plain‑language rulings,
chat‑based legal first aid and who is excluded when tools assume connectivity, literacy or one language.
Use this to keep linguistic accessibility on the table in AI discussions.
🤝 Legal Aid & Workload Explorer
Look at AI‑related initiatives in legal aid and court administration and classify tasks
as good candidates for automation, "assistive only", or clearly human‑only.
Use this to argue for more human lawyers where risk is highest, not fewer.
You are assisting human rights and justice policy experts.
Using your training data and any public sources you can access, including if possible the UN General Assembly report A/80/169 "AI in judicial systems: promises and pitfalls" and related UN documentation, build a complete single-page web application called "AI Use Landscape in Judicial Systems" with the following features:
1. Data model
- Create a small internal dataset of concrete examples of AI use in justice systems that are publicly documented in A/80/169 and similar UN reporting.
- For each example, store at least:
- an ID (e.g. A001, A002…)
- a short, 1–2 sentence description of the use of AI in a justice context
- the country where the tool is implemented (for example: Mexico for the "Sor Juana" system, Spain for "Carpeta justicia", India for "Haqdarshak", Latvia for the legal information wizard, Malaysia for the sharia procedures chatbot, Saudi Arabia for the court digital assistant, Nigeria for tools such as Podus, OpenLawsNig and Citizens' Gavel, Colombia for justice data initiatives, Brazil for the VICTOR system, the United States for tools like ShotSpotter, etc.)
- region (e.g. "Latin America", "Africa", "Europe", "global")
- type of AI use (e.g. case allocation, legal research, translation/transcription, predictive analytics, legal information/chatbots, case management, risk assessment, remote hearings, access-to-justice assistants)
- stage of proceedings (pre-hearing, hearing, post-hearing, cross-cutting)
- main justice problem addressed (e.g. backlog, access to justice, cost, consistency, transparency).
2. User interface
- Top statistics panel showing:
- number of examples per type of AI use (simple bar chart or counters),
- number of examples per region,
- number of examples per country.
- Left-side filters for:
- region / country,
- type of AI use,
- stage of proceedings.
- Main table where each row is one example with:
- ID,
- short description,
- country,
- region,
- AI use type,
- stage,
- justice problem (as small tags).
- When a row is clicked, open a detail view showing:
- full description,
- all tags,
- a small "Monitoring notes" box where you generate 3–5 questions a human rights monitor could ask (e.g. about impact on fair trial, non-discrimination, access to remedy, digital divide) in that specific country context.
3. UX requirements
- Modern, responsive design suitable for laptops and projectors.
- Use colour-coded tags for AI use types and justice problems.
- Make filters work instantly (client-side filtering on the dataset).
- The app must be self-contained: a single HTML file with inline CSS and JavaScript, easy to host as a static page.
If you are unsure about specific details of A/80/169, keep examples generic, rely on well-documented tools, and do not attribute controversial practices to named countries unless you are confident they are publicly documented.
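In practice, step 1 of this Landscape prompt usually becomes a plain JavaScript array, and step 3's "instant" filtering a few array operations. A minimal sketch, with illustrative field names and one assumed entry:

// Illustrative data model and client-side filtering for the landscape app.
const examples = [
  { id: "A001", country: "Brazil", region: "Latin America",
    useType: "legal research", stage: "pre-hearing", problem: "backlog",
    description: "VICTOR helps triage incoming appeals at the supreme court." },
  // ...more entries
];

// Active filters; null means "no filter on this field".
const filters = { region: null, useType: null, stage: null };

function applyFilters() {
  return examples.filter(e =>
    (!filters.region  || e.region  === filters.region) &&
    (!filters.useType || e.useType === filters.useType) &&
    (!filters.stage   || e.stage   === filters.stage));
}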
You are assisting human rights officers who monitor the impact of AI in judicial systems.
Using your knowledge of the UN report A/80/169 "AI in judicial systems: promises and pitfalls" and related UN human rights standards, create a single-page web application called "Risk & Rights Explorer" with these features:
1. Synthetic paragraph set and categorisation
- Construct a manageable set of synthetic "paragraph-like" entries (for example 40–80 entries) that reflect:
- the kinds of situations and country examples described in A/80/169 (such as Brazil's VICTOR, Nigeria's Citizens' Gavel pretrial tools, India's, Colombia's and Austria's justice data initiatives, court chatbots in Latvia, Malaysia and Saudi Arabia, tools like ShotSpotter and COMPAS in the United States, SLPS in Poland, AI integration in Chinese courts, sandboxes in Azerbaijan, etc.),
- and similar UN reports on AI and justice.
- For each entry, store:
- an ID,
- a short description of the situation,
- the country or countries involved,
- the type of AI use (translation, triage, evidence analysis, case allocation, decision support, legal information, etc.),
- zero or more human rights impact categories, for example:
- judicial independence and impartiality
- fair trial / due process / equality of arms
- non-discrimination and equality
- privacy and data protection
- access to justice and effective remedy
- language, translation and digital divide
- economic and social barriers (costs, legal aid, infrastructure)
- transparency, explainability and accountability.
- For every assigned category, generate a one-sentence explanation in clear language:
"This situation affects [RIGHT] because…".
2. User interface
- Left filter panel: list all categories with checkboxes and counters (how many entries mention each).
- Additional filter for country so users can focus on one State or region.
- Main panel: scrollable list of entries where each item shows:
- the ID,
- country name(s),
- short snippet of the entry text,
- small colour-coded chips for the categories attached to that entry.
- When a user clicks on an entry, open a detail view showing:
- full text,
- list of categories,
- short explanations,
- a dynamic "questions for monitors" box with 3–5 follow-up questions they could ask States or courts in that particular country (for UPR, treaty bodies, special procedures, bilateral dialogue, etc.).
3. Monitoring export
- Allow the user to select one or more categories and/or a country and export:
- a CSV or JSON file with all matching entries and their metadata, and
- a short narrative summary highlighting main patterns and gaps found for the selected rights and country.
4. UX
- Use a clean, accessible layout with large fonts and clear contrasts.
- All logic should run in the browser on a static HTML page (no server required).
Keep the examples realistic but generic enough for training. Do not fabricate specific accusations about named States or individuals; frame them as "illustrative cases inspired by UN reporting".
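The export in step 3 of this Risk & Rights prompt needs no server: a browser can build and download a CSV entirely client-side. A minimal sketch, assuming entries shaped roughly as in step 1 (field names are illustrative):

// Illustrative CSV export: build the file in memory and trigger a download.
function exportCsv(entries) {
  const header = "id,country,aiUse,categories";
  const rows = entries.map(e =>
    [e.id, e.country, e.aiUse, e.categories.join("; ")]
      .map(v => `"${String(v).replace(/"/g, '""')}"`) // escape embedded quotes
      .join(","));
  const blob = new Blob([header + "\n" + rows.join("\n")], { type: "text/csv" });
  const link = document.createElement("a");
  link.href = URL.createObjectURL(blob);
  link.download = "risk-rights-export.csv";
  link.click();
  URL.revokeObjectURL(link.href);
}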
You are helping justice ministries, courts and oversight bodies turn the UN report A/80/169 "AI in judicial systems: promises and pitfalls" into a practical governance tool.
Using your knowledge of A/80/169 and related UN human rights guidance and regional instruments (such as the Council of Europe AI convention and the EU AI Act), create a single-page web application called "Safeguards & Governance Checklist Builder" with the following functionality:
1. Safeguard library
- Create a structured library of safeguards and governance measures inspired by A/80/169, including:
- governance structures and oversight mechanisms for AI in judicial systems,
- human rights–based impact assessments and audits,
- transparency, documentation and explainability requirements,
- participation of judges, lawyers and affected communities,
- safeguards for judicial independence,
- limits on automation and requirements for human control,
- safeguards around procurement and public–private partnerships.
- For each safeguard, store:
- a short, plain-language checklist item,
- a more detailed explanation,
- an indication of which actors it is most relevant for (judges, ministries, court administrations, oversight bodies),
- an optional "illustrative countries" field listing countries where similar issues appear in UN reporting (for example Brazil, Nigeria, Pakistan, the United States, China, Poland, France, Azerbaijan, etc.).
2. Checklist builder
- Let the user choose a context from a dropdown:
- "Administrative support tools for courts",
- "Decision-support tools for judges",
- "Public-facing legal information tools",
- "Tools used by legal aid providers or public defenders".
- Let the user optionally select one or more countries they are monitoring.
- For each context, show:
- a tailored subset of safeguards most relevant to that context,
- grouped under headings like "Design", "Procurement", "Deployment", "Oversight".
- For each checklist item, allow users to mark:
- "Planned", "In place", or "Not applicable".
- Generate, on demand, a short narrative summary that:
- highlights key gaps,
- suggests priority actions,
- can be copied into a briefing note or mission report,
- optionally mentioning the country or countries the user selected.
3. UX
- Make the app easy to print or export as a PDF (printer-friendly styling).
- Use simple controls and labels – suitable for non-technical diplomats and justice officials.
- All data can live in JavaScript structures in the page; no external database is required.
Base all safeguards on the spirit of A/80/169 and UN human rights law. Keep the language cautious and avoid concrete allegations against specific States; frame examples as "inspired by practices described in UN reports".
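Behind the Checklist Builder's "narrative summary" there is usually nothing more exotic than a scan over checklist state. A minimal sketch, with an illustrative data shape:

// Illustrative gap summary: collect safeguards still marked "Planned".
const checklist = [
  { item: "Human rights impact assessment completed", group: "Design",    status: "Planned" },
  { item: "Human review of AI-assisted outputs",      group: "Oversight", status: "In place" },
  // ...more safeguards
];

function summariseGaps() {
  const gaps = checklist.filter(c => c.status === "Planned");
  if (gaps.length === 0) return "No outstanding gaps recorded.";
  return "Key gaps: " +
         gaps.map(c => `${c.item} (${c.group})`).join("; ") +
         ". Suggested priority: close Design-stage gaps before deployment.";
}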
You are supporting organisations and public officials who want to make legal and policy texts more accessible.
Create a single-page web app called "Language & Accessibility Lens" that focuses on transforming legal text into clearer versions, using examples inspired by the UN report A/80/169 "AI in judicial systems: promises and pitfalls".
IMPORTANT CONSTRAINTS FOR THIS DEMO
- Assume this app will be hosted as a static page on a free service with no backend.
- That means: in this workshop demo, users CANNOT paste their own text.
- Instead, you must:
- embed a small set of sample legal-style texts directly in the JavaScript,
- and provide a clear notice in the interface that explains:
"This demo works on built-in examples only. In a real deployment, you would paste your own legal text and the app would apply the same transformations."
1. Sample texts (internal dataset)
- Inside the code, create 5 short sample legal or policy fragments (3–8 sentences each), written in your own words, NOT as direct quotations, but inspired by the themes of A/80/169:
- Example themes could include:
- AI-supported translation and transcription in courts (e.g. India, Spain),
- plain-language rulings systems (e.g. "Sor Juana" in Mexico),
- WhatsApp-based legal first aid tools (e.g. Podus in Nigeria),
- online legal information wizards in Latvia or Malaysia,
- access-to-justice platforms supporting asylum seekers or low-income users.
- Each sample text object should contain:
- an ID (e.g. "EX1", "EX2"…),
- a short title,
- a short description of context (country, tool, justice issue),
- the "original" legal-style text (your own drafting, not copied from any real document).
- In the UI, expose at least 3 of these as options in a dropdown labelled e.g.:
- "Example 1 – AI translation for court hearings (India)"
- "Example 2 – Plain-language rulings (Mexico)"
- "Example 3 – WhatsApp legal helper (Nigeria)"
- The remaining examples can be accessible via additional options or buttons, but at minimum there should be 3 clearly visible examples in the dropdown.
2. Transformation modes (versions of the same text)
For whichever example is selected, the app should show several versions of the same content:
- Original version:
- Show the legal-style text exactly as defined in your internal dataset.
- Plain-language version:
- Rewrite the content in clear, everyday language for adults with no legal training.
- Keep all key rights, obligations, deadlines and conditions.
- Avoid jargon, Latin phrases and references to article numbers unless necessary.
- Prefer short sentences and active voice.
- Child-friendly version:
- Rewrite the text as if explaining the situation to a teenager (around 12–15 years old) who might be affected by the decision.
- Use simple words, concrete examples and clear statements about rights and where to ask for help.
- Avoid fear-based language while still mentioning important risks.
- Make it explicit that this is an explanation, not a full legal text.
- Local / Indigenous language / translation mode:
- Instead of automatically generating real Indigenous-language text (which could be inaccurate or culturally insensitive), create a special "local / Indigenous language" panel that:
- shows a short, simplified English version with labels like "Local language version – to be co-created with community translators",
- explains HOW a local team could translate or adapt the plain-language version into a local or Indigenous language,
- emphasises that this step must be done with native speakers and community experts, not only by AI.
- You may also include a simple example using a widely spoken language (for instance a short Spanish paraphrase for one example), but make clear that it is illustrative only.
3. Interface layout and interaction
- Layout:
- At the top, include:
- a short explanation of the tool ("Transform a legal-style text into plain language, child-friendly explanations and a local-language friendly template."),
- a highlighted notice that says something like:
"Demo limitation: in this workshop version you cannot paste your own text. Instead, choose one of the built-in examples. In a real version, you would paste your own document here and the app would apply the same transformations."
- Below that, add:
- a dropdown to choose between the 3+ built-in examples (showing short titles and country).
- Main area:
- Display the selected example in a multi-column or stacked layout:
- Column / card for Original text
- Column / card for Plain-language version
- Column / card for Child-friendly version
- Column / card for Local / Indigenous language guidance
- Provide toggles or checkboxes so the user can show/hide each version, allowing:
- side-by-side comparison (e.g. Original vs Plain-language),
- overlay-like behaviour (the same text displayed in several versions on the same screen).
4. Guidance and human review checklist
- Add a right-hand panel or a section below the text called "Human review checklist".
- This panel should:
- list 6–10 practical tips on how to write and verify plain-language and child-friendly legal texts, for example:
- "Check that no deadlines or appeal rights were removed."
- "Make sure names of institutions and contact points are still present."
- "Avoid promising rights that do not exist in law."
- "If you simplify, keep the structure of the original decision visible."
- "Have a lawyer and a non-lawyer both review the plain-language version."
- include a short note on risks:
- explain that AI-generated plain language is a starting point, not an official version,
- encourage users to adapt the text to local legal terms and cultural context.
- Optionally, include a small "Copy for review" button that copies the plain-language version together with the checklist to the clipboard so a human can paste it into their own document.
5. UX and technical requirements
- The app must be a self-contained single HTML file with inline CSS and JavaScript.
- All sample texts and transformed versions should be generated on the client side; no external API calls.
- Design the interface to be:
- readable on a laptop and projector,
- friendly for non-technical public officials, NGOs and diplomats (clear headings, large fonts, good contrast).
- Use simple, neutral styling; this is an educational and monitoring tool, not a marketing page.
Overall goal:
- Make this a practical workshop demo of how natural-language programming can help officials and NGOs:
- see the difference between legal language and plain language,
- think critically about what MUST NOT be lost in simplification,
- and imagine how they could adapt the tool to their own texts once they control the code.
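The "Copy for review" button in step 4 maps onto the standard browser clipboard API. A minimal sketch; the arguments and messages are illustrative:

// Illustrative "Copy for review": plain-language text plus the checklist,
// copied so a human reviewer can paste it into their own document.
async function copyForReview(plainText, checklistTips) {
  const payload = plainText + "\n\nHuman review checklist:\n" +
                  checklistTips.map(t => "- " + t).join("\n");
  try {
    await navigator.clipboard.writeText(payload); // needs a secure (https) context
    alert("Copied - paste into your document for human review.");
  } catch (err) {
    console.error("Clipboard copy failed:", err);
  }
}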
You are assisting NGOs and legal aid organisations to think critically about AI in legal aid and workloads in justice systems.
Using your knowledge of A/80/169 "AI in judicial systems: promises and pitfalls" and broader UN guidance on access to justice, create a single-page web application called "Legal Aid & Workload Explorer" with the following features:
1. Task and context library
- Construct a list of concrete tasks in legal aid and justice administration that AI is already used for or is often proposed for, inspired by country examples in the UN report, such as:
- AsyLex's "Rights in exile" platform supporting asylum cases (Switzerland/global),
- Haqdarshak's extraction of welfare entitlements in India,
- OpenLawsNig and Podus helping low-income communities in Nigeria,
- Hear Me Out in Australia assisting with complaints,
- MiFILE and similar tools for self-represented litigants in jurisdictions like the United States,
- justice data analysis pilots in Colombia and Brazil.
- For each task, store:
- a short description,
- the country/countries where similar tools are deployed,
- typical context (e.g. high-volume administrative courts, asylum cases, rural legal aid),
- an illustrative, generic example inspired by situations discussed in A/80/169 and similar reports.
2. Assessment dimensions
- For each task, infer and store:
- potential benefits (time saved, cost, reach, better information, etc.),
- key human rights risks (e.g. impact on fair trial, confidentiality, non-discrimination, access to a human lawyer),
- a recommended oversight level:
- "Good candidate for carefully supervised automation",
- "Assistive only – human in the loop required",
- "Do not automate – should remain a human task".
3. Web interface
- Present tasks in a matrix or card view with:
- task name,
- country,
- short description,
- benefits tags,
- risk tags,
- oversight level (with colour coding).
- Filters to:
- show only low-risk candidates for automation,
- show tasks most relevant to high-volume, low-resource contexts,
- filter by country.
- Add a "Generate briefing" button that creates a short, structured summary:
- top 5 low-risk opportunities where AI could reduce workload,
- top 5 high-risk areas that should remain human-centred,
- mentioning the countries and contexts selected by the user.
4. UX
- Design for non-technical lawyers and NGO staff.
- Single HTML file with inline CSS and JavaScript, no external dependencies.
Keep all examples generic and training-oriented. Do not turn this into a scorecard of good or bad States; instead, frame it as a tool for asking better questions about AI in legal aid in each country.
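The "Generate briefing" button in step 3 of this Legal Aid prompt is, at heart, a grouping of tasks by their oversight label. A minimal sketch, assuming task objects shaped as described above (field names are illustrative):

// Illustrative briefing generator: split tasks by recommended oversight level.
function generateBriefing(tasks) {
  const lowRisk = tasks
    .filter(t => t.oversight === "Good candidate for carefully supervised automation")
    .slice(0, 5);
  const humanOnly = tasks
    .filter(t => t.oversight === "Do not automate - should remain a human task")
    .slice(0, 5);
  return [
    "Top low-risk opportunities:",
    ...lowRisk.map(t => `- ${t.name} (${t.country})`),
    "Areas that should remain human-centred:",
    ...humanOnly.map(t => `- ${t.name} (${t.country})`)
  ].join("\n");
}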
🤝 AI-HUMAN COLLABORATION
AI as Teammate on a Jagged Frontier
In "The Cybernetic Teammate",
Ethan Mollick and colleagues report experiments where people working with AI produced better work than people or AI alone.
The system behaves less like a calculator and more like an extra colleague.
Rule of thumb:
Use AI for first drafts, prototypes and exploring options.
Decide which steps must later become deterministic scripts.
Decide where human review is mandatory and who signs off.
The "Jagged Technological Frontier"
Mollick also describes a jagged technological frontier:
AI is excellent at some tasks and unreliable at others, even when they look similar. There is no single line where AI is simply "good enough".
Only domain experts in each field can map what is safe and useful in their own workflows.
Takeaway: AI literacy is about knowing when AI helps and when it must be constrained.
As human rights experts, we must experiment to discover where and how LLMs can be responsibly integrated into our work. No one will do it for us.
🎓 AI LITERACY FOR HUMAN RIGHTS WORK
From "AI Does It" to "AI in the Pipeline"
AI rarely "does the whole job". It sits inside a larger process you design and supervise.
Before using AI, ask yourself three questions:
What is the goal? (e.g. create a database of human rights documents)
What steps need full human control? (definitions, final decisions, sensitive judgements)
Can a human verify the output? If yes, you can delegate more of that step to AI. If not, treat the output cautiously. Writing a script (even with an LLM's help) ensures a deterministic, verifiable process and results.
How AI Can Behave in Your Pipeline
Scripts (deterministic)
Classic code. Same input → same output. Best for well-structured data and anything that must be auditable. You can write scripts with LLMs and leverage existing open-source community resources (pre-built scripts, datasets, and models) to accelerate development; a small example follows this comparison.
LLM prompting (probabilistic)
Very flexible content generation; more unpredictable (see how LLMs work). Good for drafts, prototypes, and brainstorming – not for authoritative outputs or final decisions.
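To make the contrast concrete, here is a minimal deterministic script in Node.js; the input file name and search term are illustrative assumptions:

// Illustrative deterministic script: the same report text always yields
// the same, auditable result - unlike re-asking an LLM the same question.
const fs = require("fs");

const text = fs.readFileSync("report.txt", "utf8");   // assumed input file
const paragraphs = text.split(/\n\s*\n/);             // split on blank lines
const hits = paragraphs
  .map((p, i) => ({ para: i + 1,
                    count: (p.match(/discrimination/gi) || []).length }))
  .filter(h => h.count > 0);

console.log(JSON.stringify(hits, null, 2));           // repeatable, verifiable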
AI Literacy
Task Design & Prompting
Clearly describe the task, the sources AI may use, and any red lines (what must not happen).
Data Foundations & Structuring
Where and how to get the data; extracting information from PDFs into simple, reusable formats (spreadsheets, databases).
Validation & Verifiability
Test AI tools on cases you already know; design simple checks and flags so you can see when something looks wrong (a small example follows this list).
Ethical & Human Rights Safeguards
Privacy, non-discrimination, transparency, meaningful human control.
Workflow & Governance
Decide where AI sits in your process, who signs off, and how decisions are documented.
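As an example of the kind of simple check meant above, the sketch below compares AI-extracted paragraphs against a hand-verified count; the document ID, count and data shape are illustrative assumptions:

// Illustrative validation flag: compare AI output with known ground truth.
const groundTruth = { "A/80/169": 120 }; // hand-verified paragraph count (assumed)

function validateExtraction(docId, extractedParagraphs) {
  const expected = groundTruth[docId];
  if (expected === undefined)
    return { ok: false, flag: "No ground truth for this document; review manually." };
  if (extractedParagraphs.length !== expected)
    return { ok: false,
             flag: `Expected ${expected} paragraphs, got ${extractedParagraphs.length}.` };
  return { ok: true, flag: null };
}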
Key point: These are learnable skills. You do not need to become a software engineer –
but you do need enough AI literacy to decide where AI fits safely into your human rights workflows.
🇮🇹 LESSONS FROM VENICE
What I Learned Teaching 16 Human Rights Professionals
Setting: a two-day workshop at the Global Campus of Human Rights, Venice. Participants: 16 human rights professionals, most with legal backgrounds.
1. The AI Exposure Gap Is Massive
Only 1 of 16 participants (6%) had used paid LLMs. Most were on free tiers, which creates a misleading impression of what AI can (or cannot) do.
2. Purpose Must Precede Process
Top feedback: "I didn't understand WHY we were building this." Teaching "how to build a search engine" without explaining "why this beats Ctrl+F" creates cognitive resistance. Start with the problem, not the solution.
3. The "Seeing It Work" Moment Is Critical
Multiple participants cited watching the app run as their breakthrough. For non-technical learners: working demos before explanations. Abstract concepts (APIs, servers) mean nothing until they see concrete output.
4. Bimodal Learning Outcomes
AI-assisted programming creates two groups: those who "get it" immediately (38%) and those who remain lost (25%). Little middle ground. AI tools don't reduce learning curves – they change their shape entirely.
5. Application Overwhelm > Concept Overwhelm
Participants weren't confused by programming – they were overwhelmed by tool-switching (Cursor → Colab, etc.). Each app switch = cognitive load. Lesson: friction reduction matters more than technical purity.
6. "Natural Language" Isn't Natural to Everyone
Prompting in English ≠ accessible. Programming is structured thinking. Natural language is ambiguous; code is precise. Students can struggle with computational logic, whether in Python or prompts.
Final thought: Teaching AI literacy to human rights professionals isn't about making them programmers.
It's about showing them what's possible and giving them the confidence to try.
Workshop by Łukasz Szoszkiewicz
Adam Mickiewicz University, Poznań
December 2025 • ITU Geneva • FNF Human Rights Hub
READY TO BUILD?
C:\GENEVA\SECRET_CONSOLE.EXE
You unlocked a hidden console.
That curiosity is exactly what you need
for working with AI and building tools
for human rights.
Try asking yourself:
"What else could we automate, visualize,
or analyse for our field?"