ECCHR Explore

January 2020
Legal Tech · Multimodal UI · Knowledge Cartography · RAG · Semantic Traversal

A knowledge exploration tool commissioned by ECCHR that reimagined podcasts as interactive, navigable archives — shifting audio from passive content to an active, exploratory knowledge interface.

How do you navigate a conversation that unfolds across hours of audio? The European Center for Constitutional and Human Rights (ECCHR) commissioned this project to rethink their podcast series, "Framing Human Rights." The series uniquely explored the intersection of art and human rights, featuring rich dialogues between artists, activists, and legal thinkers. However, the podcast format itself trapped that richness within linear episodes, making the connections difficult to explore.

[Video: ECCHR Explore entity tracking]

The Challenge: Conversations Trapped in Time

Discourse, especially at the intersection of art and human rights law, isn’t linear; it loops, references, and builds connections across time and topics. The conversations in ECCHR’s podcast weren’t just sequential statements but dialogues weaving together artistic practice, legal strategy, and activism. Standard podcast players, however, flatten this complexity.

They present audio as isolated tracks, making it almost impossible to follow the threads of an idea or trace the relationships between concepts discussed episodes apart. The core challenge felt like a mismatch between the medium and the message: how could we reveal the inherent network structure of these conversations, freeing the knowledge from its linear container?

Our Approach: Mapping the Discourse

If the conversations formed a network, why not represent it that way? Our approach focused on revealing the hidden semantic network within the podcast series, moving beyond the episode-as-file metaphor to treat the series as a knowledge landscape and support more associative ways of thinking about the content. This involved two stages:

First, we built a mobile listening experience designed to surface connections dynamically. We transcribed the entire series, used NLP for entity extraction (tuned for legal/human rights terms), generated definitions with early transformer models (like GPT-2), and built a static knowledge graph mapping the relationships.
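
A minimal sketch of that offline indexing stage, assuming spaCy for entity extraction and networkx for the graph; the actual pipeline used NER tuned for legal and human rights terminology, so the libraries, entity labels, and field names here are illustrative.

```python
# Sketch of the offline indexing stage: pull entities out of episode
# transcripts and link them into a static knowledge graph.
# spaCy and networkx are assumptions; the real pipeline used entity
# extraction tuned for legal/human rights terms.
import spacy
import networkx as nx

nlp = spacy.load("en_core_web_sm")
graph = nx.Graph()

def index_transcript(episode_id: str, segments: list[dict]) -> None:
    """Add one episode's transcript segments to the shared knowledge graph."""
    for segment in segments:
        doc = nlp(segment["text"])
        entities = sorted({ent.text.lower() for ent in doc.ents
                           if ent.label_ in {"PERSON", "ORG", "LAW", "GPE", "EVENT"}})
        for entity in entities:
            # Each entity node remembers which audio segments mention it.
            graph.add_node(entity)
            graph.nodes[entity].setdefault("mentions", []).append(
                (episode_id, segment["start"], segment["end"])
            )
        # Co-occurrence within a segment becomes a weighted edge.
        for i, a in enumerate(entities):
            for b in entities[i + 1:]:
                if graph.has_edge(a, b):
                    graph[a][b]["weight"] += 1
                else:
                    graph.add_edge(a, b, weight=1)
```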

Then, as someone listened to an episode, the app would perform semantic searches over this pre-built graph in real-time, identifying key entities mentioned in the current segment and surfacing related audio chunks and definitions from other episodes. This system, an early form of Retrieval-Augmented Generation (RAG) built back in 2020, aimed to reduce the friction of discovery by showing connections precisely when they became relevant to the listener’s current context.
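
The playback-time retrieval step can be sketched roughly as follows; sentence-transformers, the model name, and the sample chunks are stand-ins for illustration, since the 2020 system relied on earlier embedding and search techniques.

```python
# Sketch of the playback-time retrieval: given the transcript chunk
# currently playing, surface semantically related chunks from other
# episodes. Model and example data are illustrative assumptions.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

# Computed offline over the whole series: one entry per transcript chunk.
chunks = [
    {"episode": "ep01", "start": 312.0, "text": "…discussion of universal jurisdiction…"},
    {"episode": "ep04", "start": 95.5, "text": "…an artist documenting testimony as evidence…"},
]
chunk_vectors = model.encode([c["text"] for c in chunks], normalize_embeddings=True)

def related(current_text: str, k: int = 5) -> list[dict]:
    """Return the k chunks most similar to what the listener is hearing now."""
    query = model.encode([current_text], normalize_embeddings=True)[0]
    scores = chunk_vectors @ query  # cosine similarity, since vectors are normalized
    top = np.argsort(scores)[::-1][:k]
    return [{**chunks[i], "score": float(scores[i])} for i in top]
```

Normalizing the embeddings lets a plain dot product stand in for cosine similarity, which keeps each per-segment lookup cheap enough to run continuously as the listener moves through an episode.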

I explore this idea further in a piece on auto-associative workspaces.

Second, inspired by work on Metasphere, we prototyped a desktop interface based on principles of Cognitive Cartography. The idea was to visualize the knowledge graph spatially — not just showing links, but mapping conceptual clusters, visualizing relationships, and allowing users to navigate the discourse. Imagine zooming into a cluster of cases mentioned across several talks, or tracing a specific concept’s evolution through the series. This aimed for a kind of Semantic Traversal — moving through meaning rather than just files.
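
One way to sketch that spatialization, reusing the co-occurrence graph from the indexing stage; the force-directed layout and modularity-based clustering below are stand-ins, not the prototype's actual layout algorithm.

```python
# Sketch of the spatial layout behind the desktop prototype: project the
# entity graph into 2D so conceptual clusters become visible regions.
# Layout and clustering choices here are illustrative assumptions.
import networkx as nx
from networkx.algorithms import community

def spatialize(graph: nx.Graph) -> dict:
    """Return 2D coordinates and a cluster id for every entity node."""
    positions = nx.spring_layout(graph, weight="weight", seed=42)
    clusters = community.greedy_modularity_communities(graph, weight="weight")
    labels = {node: i for i, members in enumerate(clusters) for node in members}
    return {
        node: {"x": float(x), "y": float(y), "cluster": labels.get(node, -1)}
        for node, (x, y) in positions.items()
    }
```

Clusters of co-mentioned cases or concepts then render as neighbourhoods the user can zoom into, which is the navigation gesture the prototype was built around.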

[Image: Mockup for the desktop version]

The entity tracking and relationship mapping work here became a direct precursor to similar efforts in Tweetscape.

Outcome: Early Steps Towards Navigable Audio

The mobile experiment worked surprisingly well, demonstrating that real-time semantic retrieval based on the current listening context could genuinely aid understanding and discovery. It shifted listening from a purely passive act towards something more exploratory. A significant part of the value also came simply from the transcription process itself, making hours of previously opaque audio content searchable and indexable for the first time.

The spatial desktop interface, though ambitious, remained a prototype due to funding constraints. However, it served as a valuable proof-of-concept for applying spatialization techniques to complex discursive content, influencing how I thought about representing interconnected knowledge visually.

Looking back, this project felt like an early attempt at pushing back against the flattening effects of typical media formats. It highlighted the potential for AI (even the relatively simple NLP of 2020) to surface the latent structure within conversations. And turning unstructured audio into a queryable knowledge base foreshadowed a broader trend — one explored in the Ephemeral Interfaces series — where AI gets embedded into systems to make previously inaccessible data semantically available and interactive.

Team

Project Lead: Julian Fleck
Visual Design: Ray Jacobs
Tech Lead: Malte Müller
Research: Yashar Mansoori
Design Research: Ricardo Saavedra
Frontend Development: Florian Berg