╔══════════════════════════════════════════════════════════════╗
║                        THE EMPROMPTED                        ║
║    AI-HUMAN COLLABORATION FOR HUMAN RIGHTS PROFESSIONALS     ║
║                                                              ║
║                  ITU, GENEVA • DECEMBER 2025                 ║
╚══════════════════════════════════════════════════════════════╝
📁 WORKSHOP.EXE

Paradigm shift

In the 1980s, graphical interfaces meant almost anyone could use a computer.

Today, large language models mean almost anyone can ask a computer to help build tools, analyse documents or produce visualisations in plain English.

Vibecoding was named Collins Dictionary's Word of the Year 2025. The term, coined by Andrej Karpathy, describes a style of programming where you explain what you want in ordinary language and an AI system writes the code.

Why It Matters for Human Rights Professionals

• You can sketch and test tools yourself, without waiting for a full IT project.
• You can turn your expertise and questions into small, concrete applications for (almost) free.
• If you want to scale up, you can have more informed conversations with technical teams: you arrive with a prototype, not just an idea.
• You can ensure human rights by design from the start, not as an afterthought.
• You can join the open-source AI community by exploring and contributing to the vast collection of models, datasets, and code on platforms like Hugging Face and GitHub.
🚀 YOUR STARTER PROMPT

Copy This Prompt

This prompt shows what it now means to "program" with natural language: you describe the tool; an AI system generates the code.

"Create a complete web application for searching
UN treaty body documents with the following functions:
1. Paragraph-Level Extraction
- Extract the first 20 paragraphs from the UN report A/80/169 on AI in judicial systems
- Each paragraph maintains its original numbering (1-20)
- Full text preservation with proper formatting

2. Thematic Labeling System
- Categorize each paragraph with relevant UN human rights framework themes – come up with the 6 most relevant.
Explain in the output why and how you selected and defined the categories.

3. Advanced Search Functionality
- Keyword Search: Find individual terms (e.g., "information," "speech")
- Thematic Filtering: Filter by specific human rights themes
- Highlight Results: Search terms highlighted in yellow
- Real-time Search: Instant results as you type

4. User Interface
- Modern, responsive design similar to OHCHR style
- Live dashboard with stats about filtered documents - theme distribution, term frequency, and bigrams
- Glass morphism effects and professional appearance
- Color-coded theme tags for easy identification
- Paragraph numbering system for reference
- Intuitive copy text and citation function
- Mobile-friendly responsive layout"

Where to Use This

  • LM Arena – free access to several cutting-edge models
  • Claude Code – very strong for coding tasks

Instant result: a working prototype web app, built by you and an AI model together. See an example (built with Claude Sonnet 4.5)

A few iterations later: UN General Comments database | Polish Constitutional Court Explorer

🧯 ANTI-HYPE: Low-hanging fruit first

Vibecoding shines for simple, useful tools: a document search engine, a lightweight analytics dashboard, or a short report generator. Human rights experts often lack exactly these basics: decision/document search, filtering, and XLSX export. Build step by step and shift toward making your own code changes as you learn. Over-ambitious first projects often end like this:

Ambitious project goes wrong (funny meme)
“Ship the simple thing first, then grow.”
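
To make "the simple thing" concrete, here is a minimal Python sketch (file and column names are hypothetical) of the basics named above: filter a spreadsheet of decisions by keyword and export the matches to XLSX.

import pandas as pd

# Load a spreadsheet of decisions (hypothetical file and column names).
df = pd.read_csv("decisions.csv")  # columns: "case", "date", "summary"

# Keep rows whose summary mentions a keyword (case-insensitive).
keyword = "fair trial"
matches = df[df["summary"].str.contains(keyword, case=False, na=False)]

# Export the filtered set to XLSX (requires the openpyxl package).
matches.to_excel("filtered_decisions.xlsx", index=False)
print(f"{len(matches)} of {len(df)} decisions mention '{keyword}'")

Roughly a dozen lines cover search, filtering, and XLSX export – exactly the low-hanging fruit worth shipping first.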
🧪 FIVE APPS FROM ONE REPORT

What You Can "Vibecode" from the UN report

Using the report of the UN Special Rapporteur on the independence of judges and lawyers A/80/169 – AI in judicial systems: promises and pitfalls as a reference point, you can describe practical monitoring tools in natural language and ask an AI model to generate prototypes.

Disclaimer – one-shot prototypes

Each app below was generated with a single, long one‑shot prompt: one detailed instruction asking an AI coding model to produce a complete prototype in one go. Each prompt refers to A/80/169.

These apps are demonstration tools only.

How to Use This Section

  • Choose an app that matches your work (e.g. safeguards, language, legal aid).
  • Click its tile to open a short description and the one‑shot prompt.
  • Open the live demo, or build it yourself by copying the prompt into LM Arena. Treat the result as a prototype to critique and improve.
🛡️ Safeguards Checklist – Governance and oversight
🗣️ Language & Accessibility – Plain language & digital divide
🤝 Legal Aid & Workload – What to automate (or not)
🗺️ AI Use Landscape – Where courts already use AI
⚖️ Risk & Rights Explorer – Link AI uses to rights
Select a tile above to open a small window with the description, workshop prompt and (where available) a live demo.

🗺️ AI Use Landscape in Judicial Systems

Map where and how AI is used in courts and justice systems, with each example linked to a country (e.g. Brazil's VICTOR, Mexico's Sor Juana, Nigerian access‑to‑justice tools).

🖥️ Open live demo
Use this to identify patterns and gaps in how AI is entering judicial systems.
⚖️ Risk & Rights Explorer

Explore how different AI tools in specific countries may affect fair trial, non‑discrimination, privacy and access to justice, based on situations inspired by A/80/169 and related reporting.

🖥️ Open live demo
Treat this as a way to structure human‑rights questions, not as an automated assessment.
🛡️ Safeguards & Governance Checklist Builder

Turn recommendations from A/80/169 and regional instruments into checklists for different AI uses (administrative tools, decision support, public‑facing tools, legal aid).

🖥️ Open live demo
Use the generated checklist to support dialogue with ministries, courts and oversight bodies.
🗣️ Language & Accessibility Lens

Focus on language and access: AI‑supported translation, plain‑language rulings, chat‑based legal first aid and who is excluded when tools assume connectivity, literacy or one language.

🖥️ Open live demo
Use this to keep linguistic accessibility on the table in AI discussions.
🤝 AI-HUMAN COLLABORATION

AI as Teammate on a Jagged Frontier

In "The Cybernetic Teammate", Ethan Mollick et al. reports experiments where people working with AI produced better work than people or AI alone. The system behaves less like a calculator and more like an extra colleague.

Rule of thumb:
  • Use AI for first drafts, prototypes and exploring options.
  • Decide which steps must later become deterministic scripts.
  • Decide where human review is mandatory and who signs off.

The "Jagged Technological Frontier"

Mollick also describes a jagged technological frontier: AI is excellent at some tasks and unreliable at others, even when they look similar. There is no single line where AI is simply "good enough".

Only domain experts in each field can map what is safe and useful in their own workflows.

Takeaway: AI literacy is about knowing when AI helps and when it must be constrained. As human rights experts, we must experiment to discover where and how LLMs can be responsibly integrated into our work. No one will do it for us.
🎓 AI LITERACY FOR HUMAN RIGHTS WORK

From "AI Does It" to "AI in the Pipeline"

AI rarely "does the whole job". It sits inside a larger process you design and supervise. Before using AI, ask yourself three questions:

Three questions before you start:
  1. What is the goal? (e.g. create a database of human rights documents)
  2. What steps need full human control? (definitions, final decisions, sensitive judgements)
  3. Can a human verify the output? If yes, you can delegate more of that step to AI. If not, treat the output cautiously. Writing a script (even with LLMs) ensures a deterministic, verifiable process and verifiable results.

How AI Can Behave in Your Pipeline

Scripts (deterministic)

Classic code. Same input → same output. Best for well-structured data and anything that must be auditable. You can write scripts with LLMs and leverage existing open source community resources — including pre-built scripts, datasets, and models — to accelerate development.
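
For example, a minimal sketch of a deterministic script (the file name is hypothetical): it counts how often selected terms appear in a report, and the same input file always produces the same counts, so anyone can re-run and audit the result.

import re

# Count how often selected terms appear in a report (hypothetical file).
text = open("A_80_169.txt", encoding="utf-8").read().lower()
terms = ["fair trial", "privacy", "discrimination"]

counts = {t: len(re.findall(re.escape(t), text)) for t in terms}
print(counts)  # identical output on every run with the same input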

LLM prompting (probabilistic)

Very flexible content generation; more unpredictable (see how LLMs work). Good for drafts, prototypes, and brainstorming – not for authoritative outputs or final decisions.
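
By contrast, here is a minimal sketch of the probabilistic mode (assuming the openai package and an API key; the model name is illustrative): the same prompt, sent twice, can come back as two different drafts.

from openai import OpenAI  # pip install openai; needs OPENAI_API_KEY set

client = OpenAI()
prompt = "Summarise paragraph 12 of UN report A/80/169 in one sentence."

for run in (1, 2):
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
    )
    # The two replies may differ: same input, different output.
    print(run, reply.choices[0].message.content)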

AI Literacy

Task Design & Prompting

Clearly describe the task, the sources AI may use, and any red lines (what must not happen).

Data Foundations & Structuring

Where and how to get the data; extract information from PDFs into simple, reusable formats (spreadsheets, databases).
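
As one illustration, a minimal sketch (assuming the pypdf package; the file name is hypothetical) that pulls the raw text out of a PDF page by page and saves it to a CSV you can open in any spreadsheet tool:

import csv
from pypdf import PdfReader  # pip install pypdf

reader = PdfReader("report.pdf")  # hypothetical file name
with open("report_pages.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["page", "text"])
    for i, page in enumerate(reader.pages, start=1):
        writer.writerow([i, page.extract_text() or ""])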

Validation & Verifiability

Test AI tools on cases you already know; design simple checks and flags so you can see when something looks wrong.
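
One simple pattern, sketched below with hypothetical labels: keep a small set of paragraphs you have already categorized by hand, and check the AI's labels against yours before trusting the tool at scale.

# Gold labels assigned by hand vs. labels from an AI tool (hypothetical).
gold = {"para_1": "privacy", "para_2": "fair trial", "para_3": "privacy"}
ai = {"para_1": "privacy", "para_2": "privacy", "para_3": "privacy"}

# Flag every paragraph where the AI disagrees with the human label.
mismatches = [k for k in gold if ai.get(k) != gold[k]]
accuracy = 1 - len(mismatches) / len(gold)
print(f"agreement: {accuracy:.0%}; flag for review: {mismatches}")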

Ethical & Human Rights Safeguards

Privacy, non-discrimination, transparency, meaningful human control.

Workflow & Governance

Decide where AI sits in your process, who signs off, and how decisions are documented.

Key point: These are learnable skills. You do not need to become a software engineer – but you do need enough AI literacy to decide where AI fits safely into your human rights workflows.
🇮🇹 LESSONS FROM VENICE

What I Learned Teaching 16 Human Rights Professionals

Setting: 2-day workshop at Global Campus of Human Rights, Venice.
Participants: 16 human rights professionals, most with legal backgrounds.
Hand-outs: click here

1. The AI Exposure Gap Is Massive

Only 1 of 16 participants (6%) had used paid LLMs. Most were on free tiers, which creates a misleading impression of what AI can (or cannot) do.

2. Purpose Must Precede Process

Top feedback: "I didn't understand WHY we were building this." Teaching "how to build a search engine" without explaining "why this beats Ctrl+F" creates cognitive resistance. Start with the problem, not the solution.

3. The "Seeing It Work" Moment Is Critical

Multiple participants cited watching the app run as their breakthrough. For non-technical learners: working demos before explanations. Abstract concepts (APIs, servers) mean nothing until they see concrete output.

4. Bimodal Learning Outcomes

AI-assisted programming creates two groups: those who "get it" immediately (38%) and those who remain lost (25%). Little middle ground. AI tools don't reduce learning curves – they change their shape entirely.

5. Application Overwhelm > Concept Overwhelm

Participants weren't confused by programming – they were overwhelmed by tool-switching (Cursor → Colab, etc.). Each app switch = cognitive load. Lesson: friction reduction matters more than technical purity.

6. "Natural Language" Isn't Natural to Everyone

Prompting in English ≠ accessible. Programming is structured thinking. Natural language is ambiguous; code is precise. Participants can struggle with computational logic, whether in Python or prompts.

Final thought: Teaching AI literacy to human rights professionals isn't about making them programmers. It's about showing them what's possible and giving them the confidence to try.
📚 ESSENTIAL RESOURCES

Your Toolkit

Key Reading

Tools to Experiment With

Example Projects

═══════════════════════════════════════════

Workshop by Łukasz Szoszkiewicz
Adam Mickiewicz University, Poznań
December 2025 • ITU Geneva • FNF Human Rights Hub

_ READY TO BUILD?

C:\GENEVA\SECRET_CONSOLE.EXE
You unlocked a hidden console. That curiosity is exactly what you need for working with AI and building tools for human rights. Try asking yourself: "What else could we automate, visualize, or analyse for our field?"