GenAI Tech Lead · Paris (open to relocation) · French Tax Administration (DGFiP)

AI Governance
by Design

Jehanne Dussert

Deploying and monitoring LLM-based systems at national scale (95K users), at the intersection of software engineering, observability, and institutional governance.

  • École 42 · Software engineering
  • IT Law · Cyberjustice · Master's
  • 95K public sector users
  • 50+ use cases evaluated
  • 40+ contributors coordinated
  • 3 EU mandates
00

Approach

Before deployment: intuition, risk modelling, regulatory mapping. After: production metrics, drift detection, real user behaviour. Governance is never finished — it's a feedback loop, not a deliverable.

Treating governance as a dynamic tool means the framework evolves with the system it governs: acceptance thresholds adjust, supervision levels shift, authorised use cases get refined as the production reality becomes clearer.

Concrete example — governance-driven routing

A summarisation task configured with confidentiality and traceability criteria automatically routes to a different model than the same task with relaxed constraints. The governance profile is the routing logic — no manual override, no separate config.

Implemented in the side project: use case × criteria matrix stored in Redis, resolved at inference time.
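The resolution step can be sketched in plain Python. This is a minimal illustration, not the production code: an in-memory dict stands in for the Redis hash, and the criteria names and model IDs are illustrative.

```python
# Governance profile → model resolution, sketched in plain Python.
# In the real stack the use case × criteria matrix lives in Redis and is
# resolved at inference time; a dict stands in for it here.

from dataclasses import dataclass

@dataclass(frozen=True)
class Profile:
    """Governance criteria attached to a use case (illustrative subset)."""
    confidentiality: bool = False
    traceability: bool = False

# use case × criteria matrix: the governance profile IS the routing logic
ROUTES = {
    ("summarisation", Profile(confidentiality=True, traceability=True)): "gemma3:1b",
    ("summarisation", Profile(traceability=True)): "deepseek-r1",
    ("summarisation", Profile()): "qwen2.5:1.5b",
}

def resolve(use_case: str, profile: Profile) -> str:
    """No manual override, no separate config: unrouted profiles fail loudly."""
    try:
        return ROUTES[(use_case, profile)]
    except KeyError:
        raise ValueError(f"no governed route for {use_case} with {profile}")

# Same task, different constraints, different model.
assert resolve("summarisation", Profile(confidentiality=True, traceability=True)) == "gemma3:1b"
assert resolve("summarisation", Profile()) == "qwen2.5:1.5b"
```

Keying the route on the full profile means a relaxed-constraint request can never silently reach a model reserved for confidential workloads.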

Diagram: Before go-live (intuition · risk model) → governance profile (criteria · thresholds · supervision level) → router (use case × criteria matrix) → qwen2.5:1.5b (low risk · fast) / deepseek-r1 (traceability · audit) / gemma3:1b (privacy · local only). After go-live, metrics refine the criteria: governance profile = routing logic.
01

Positioning

Deployment

LLM in production

API integration, open-source model benchmarking, prompt engineering, A/B testing — all translated into architecture decisions.

  • TTFT & latency thresholds
  • Acceptance criteria with data scientists
Observability

Monitoring → Governance

Grafana / Prometheus / Loki stack. Production metrics don't stop at the dashboard — they feed compliance posture and evolve the governance.

  • Uptime, errors, user signals
  • Metrics as governance inputs
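The "metrics as governance inputs" loop can be sketched as a threshold check: observed production metrics are compared against governance acceptance thresholds, and breaches surface as governance findings rather than dashboard-only alerts. Metric names and threshold values below are illustrative assumptions, not the production configuration.

```python
# Sketch: production metrics feeding the governance loop.
# Thresholds and metric names are illustrative, not the real acceptance criteria.

THRESHOLDS = {
    "ttft_p95_s": 2.0,   # time-to-first-token, 95th percentile, seconds
    "error_rate": 0.01,  # share of failed requests
}

def governance_findings(observed: dict[str, float]) -> list[str]:
    """Return a finding for every metric that breaches its acceptance threshold."""
    return [
        f"{name}: {observed[name]} exceeds acceptance threshold {limit}; review supervision level"
        for name, limit in THRESHOLDS.items()
        if observed.get(name, 0.0) > limit
    ]

findings = governance_findings({"ttft_p95_s": 3.4, "error_rate": 0.004})
assert findings == ["ttft_p95_s: 3.4 exceeds acceptance threshold 2.0; review supervision level"]
```

In practice the observed values would come from Prometheus queries, and a finding would trigger a review of the use case's supervision level rather than just an alert.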
Governance

Frameworks that hold

National governance, risk taxonomies, supervision models — built with 40+ cross-functional contributors and grounded in production reality.

  • AI Act, ANSSI PA-102, GDPR
  • 3-level supervision model
02

Experience

2024 – Present · French Tax Administration (DGFiP)

Tech Lead – Generative AI & AI Governance Coordinator

  • Authored national AI governance deployed across 11 directorates, covering 50+ evaluated use cases — 3-level supervision model, risk taxonomy, tool and data tiers.
  • Coordinated 40+ cross-functional contributors (legal, security, compliance, engineering) to produce governance frameworks that hold in production.
  • Deployed internal GenAI assistants serving 95,000+ civil servants; integrated open-source models and designed prompt systems for operational workflows.
  • Built monitoring and alerting stack (Grafana/Prometheus): uptime, latency, TTFT, errors, user signals.
  • Benchmarked models, set acceptance thresholds with data scientists, ran A/B tests and tuning cycles.
  • Represent France in EU Fiscalis working group on GenAI integration.
Impact: Standardised safe GenAI rollout across 11 directorates, directly influencing executive deployment decisions.
2025 – Present · European Commission

EU Horizon — Expert Evaluator

  • Assessed AI research proposals submitted to the European Commission's Horizon Europe programme.
  • Evaluation criteria: technical feasibility, risk assessment, societal impact, and regulatory alignment.
  • Selected from a pool of independent experts for proposals in the field of trustworthy AI.
Impact: Contributing to the quality and safety standards of publicly funded AI research across Europe.
2024 · Council of Europe

AI Advisory Board — Appointed Member (1 of 5)

  • One of five experts appointed to the CEPEJ AI Advisory Board on AI in justice systems.
  • Co-authored the 1st AIAB Report on the Use of Artificial Intelligence in the Judiciary.
  • Addressed transparency, accountability, and human oversight requirements for AI tools used in courts across member states.
Impact: Shaped Council of Europe guidance on AI governance in judicial contexts.
2023 – 2024 · Interministerial Digital Directorate (DINUM)

Generative AI Product Engineer

  • Led frontend implementation of AI-assisted workflows.
  • Proposed product features based on multi-ministry feedback.
  • Presented tool capabilities and limitations in interministerial settings.
Impact: Improved adoption readiness of the French government's shared GenAI platform (Albert), deployed across ministries.
2022 – 2023 · Interministerial Digital Directorate (DINUM) & Ministry of Interior

Government Innovation Fellow – Unreal Engine C++ Developer

  • Developed synthetic data generator in high-security institutional context.
  • Led cross-functional team (2 developers, 1 designer).
  • Designed governance framework for synthetic data generation: use cases, co-decision methodology, and data extraction protocol for the Paris digital twin (3D semantic segmentation).
Impact: Delivered a data generation tool for public administration needs.
View on GitHub →
03

govllm

Open source research · Regulated sectors

LLM Governance Monitoring Platform

How do you justify a model choice six months after go-live? govllm is my attempt to answer that: a self-hosted governance monitoring stack built for regulated environments, aligned with the EU AI Act, GDPR, and ANSSI recommendations.

  • Governance-driven model routing — each use case resolves to a model via its configured profile
  • Local SLMs via Ollama — data sovereignty by design
View on GitHub

Stack

FastAPI · Redis pub/sub · LiteLLM · Langfuse · Prometheus · Grafana · Ollama · Vue 3 · TypeScript · Docker Compose
Live dashboard: governance profile, model × use case matrix (AI Act). Compliance scores per use case: General 0.72 · Translation 0.41 · Code 0.88. Criteria: transparency · traceability · human oversight · risk documentation · accuracy.

→ AI Act Art. 9: ongoing risk management required

04

Skills

AI Governance

  • Governance authoring
  • Risk taxonomy
  • AI Act
  • ANSSI PA-102
  • GDPR
  • Cross-functional coordination

LLM Deployment

  • LLM API integration
  • Model benchmarking
  • Prompt engineering
  • A/B testing
  • TTFT / latency

Observability

  • Grafana
  • Prometheus
  • Loki
  • Alerting
  • Metrics → governance

Engineering

  • Python
  • FastAPI
  • PostgreSQL
  • Vue.js
  • TypeScript
  • Docker
  • Redis
05

Contributions

2026 · United Nations

UN Global Dialogue on AI Governance — Written Submission

Technical Community Stakeholder · Geneva, July 6–7 2026

2024 · Council of Europe

1st AIAB Report on the Use of AI in the Judiciary

AI Advisory Board · Appointed member (1 of 5) · CEPEJ

2024 · Vuibert

Co-author — Public Sector & Digital Transformation

200 fact sheets, diagrams and videos to develop digital literacy in the public sector

2024 · Flaash · N°04

Fiction — AI & the Judiciary

Cultural & technical foresight journal · Autumn 2024

2019 · Cyberjustice Laboratory

Inventory of Best Practices — Algorithmic Systems in Public Services

University of Montreal · Master's in Cyberjustice

06

Contact

Let's work
together

Available for projects at the intersection of AI engineering and governance — a position, a mission, an advisory role, or a collaboration.

Open to

  • Roles combining engineering & AI governance
  • Freelance missions & advisory
  • Talks & conference appearances
  • Research & open-source collaboration