Digital Twins: From AI Sensor Design to Real-Time Data Visualization

(Under Construction 🚧)

University of Cambridge 2022-2025

Digital twin (DT) technologies are transforming industries by enabling digital replicas of physical systems, driven by advances in sensing and by CAD tools that make Building Information Modelling (BIM) increasingly accessible. With hundreds or thousands of deployed sensors generating vast amounts of data in the built environment, interaction with end-users is more critical than ever. Yet the term "digital twin" remains inconsistently defined, with distinct interpretations in fields ranging from cardiac simulations to shipbuilding models.

Proposed DT classification frameworks, ranging from 3D to 12D or more, often introduce unnecessary complexity or fail to address industry-specific needs. Effective DTs must adapt to their application context, supporting visualisations that are both functional and flexible. Despite this, evaluation, which is crucial for assessing usability and fitness for purpose, is rarely discussed.

This raises a central question: how can we ensure that our digital twin representations are fit for purpose?

To answer this, I have developed a technology stack representing the tools and processes underpinning my PhD research. I call this the "PhD stack", as it captures the journey from sensor creation and deployment through platform integration and real-time data visualisation to user testing, including both qualitative and quantitative evaluation of visualisation effectiveness.

The Stack

Data Visualisations

As part of my research, I created 20 unique DT data representations based on dense sensor deployments in the Computer Laboratory's Lecture Theatre 1 (LT1). The goal was to evaluate which visualisations best suit the distinct activities of different user groups.

Digital twin visualisations grid

System Architecture

System architecture diagram

The system architecture integrates multiple components to support real-time data collection, processing, and visualisation. Sensors deployed in LT1 capture various environmental and occupancy metrics, which are then transmitted to the Adaptive City Platform (ACP) for processing. The ACP handles data aggregation, event detection, and serves as the backend for the digital twin visualisations.
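
As a rough illustration of this data path, the sketch below shows how a front-end client might receive live LT1 readings from the ACP backend over a WebSocket. The endpoint URL and message shape are illustrative assumptions, not the actual ACP API.

```typescript
// Minimal sketch of the sensor-to-front-end data path. The endpoint URL
// and message fields below are illustrative assumptions, not the real ACP API.

interface SensorReading {
  sensorId: string;   // e.g. an LT1 environmental or occupancy sensor
  timestamp: number;  // Unix epoch, milliseconds
  metric: string;     // "co2", "temperature", "occupancy", ...
  value: number;
}

// Hand each incoming reading to the visualisation layer (stubbed here).
function render(reading: SensorReading): void {
  console.log(`${reading.sensorId} ${reading.metric}=${reading.value}`);
}

const socket = new WebSocket("wss://acp.example.org/lt1/readings"); // hypothetical URL
socket.onmessage = (event: MessageEvent<string>) => {
  render(JSON.parse(event.data) as SensorReading);
};
```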

The front-end visualisations are built in JavaScript/TypeScript using D3.js and Unity, to ensure accessibility and responsiveness.
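
Below is a minimal D3.js sketch of the kind of browser-based view this enables, plotting each sensor as a circle whose radius tracks its latest reading. Sensor positions and the value range are assumptions for illustration.

```typescript
import * as d3 from "d3";

// Minimal D3 sketch: draw each sensor as a circle on an SVG floor plan,
// with radius encoding its latest value. Positions and the value domain
// are illustrative assumptions.

interface SensorState {
  sensorId: string;
  x: number;       // position on the floor plan, in SVG pixels
  y: number;
  value: number;   // latest reading, e.g. CO2 in ppm
}

const radius = d3.scaleSqrt().domain([0, 2000]).range([2, 20]); // assumed CO2 range

function draw(
  svg: d3.Selection<SVGSVGElement, unknown, null, undefined>,
  sensors: SensorState[]
): void {
  svg.selectAll<SVGCircleElement, SensorState>("circle")
    .data(sensors, d => d.sensorId)  // key by sensor id for stable updates
    .join("circle")
    .attr("cx", d => d.x)
    .attr("cy", d => d.y)
    .transition().duration(250)      // smooth real-time updates
    .attr("r", d => radius(d.value));
}
```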

Research Questions

How can we design digital twins for real-time tracking and event detection?

Digital twin view of a person walking in a corridor (static frame)
Animated corridor-walking sequence used for event detection in the digital twin

Real-time tracking and event detection require both low-latency data pipelines and carefully designed sensors that balance granularity with privacy. In this research, the ACP processed sensor inputs with sub-100 ms latency, enabling live synchronisation between physical events and their digital representations. For example, motion sensors and environmental readings on our DT testbed were visualised as animated sequences of expanding circles, like ripples from water droplets spreading through corridors. These real-time views allowed the system to detect events, such as unusual occupancy patterns or unexpected interruptions, and present them back to users in a way that was both immediate and interpretable. Designing for real-time therefore combines technical infrastructure (edge computing, dense sensing) with user-facing representations that make detected events intelligible rather than overwhelming.
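
The ripple effect described above can be sketched in a few lines of D3: each detected event spawns an expanding, fading circle at its location. Coordinates, timing, and styling are illustrative assumptions.

```typescript
import * as d3 from "d3";

// Sketch of the "water droplet" ripple: an expanding, fading circle spawned
// at the location of each detected event. Durations and radii are assumptions.

function ripple(
  svg: d3.Selection<SVGSVGElement, unknown, null, undefined>,
  x: number,
  y: number
): void {
  svg.append("circle")
    .attr("cx", x)
    .attr("cy", y)
    .attr("r", 0)
    .attr("fill", "none")
    .attr("stroke", "steelblue")
    .attr("stroke-opacity", 1)
    .transition().duration(1500).ease(d3.easeCubicOut)
    .attr("r", 60)               // ripple expands outwards...
    .attr("stroke-opacity", 0)   // ...while fading out
    .remove();                   // clean up once the transition ends
}
```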

What is the appropriate level of visual fidelity for digital-twin representations?

Comparison of multiple visual-fidelity levels in digital-twin visualisations

The right level of visual fidelity depends on the use case and its timeliness requirements. High-fidelity 3D models or immersive environments can be valuable for exploration, communication, or stakeholder engagement, but they are not always necessary, and can even be counterproductive, for time-critical applications. User studies in this research showed that low-fidelity representations (such as simplified glyphs or abstract visualisations) were more effective for rapid monitoring and decision-making, particularly when events unfolded in real time. Conversely, higher fidelity was more appropriate when the goal was to understand context, communicate findings, or simulate long-term building performance. The Digital Twin Taxonomy (DiTTo) formalises this trade-off, positioning fidelity alongside timeliness and aggregation as key design dimensions.
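
One way to make these dimensions concrete is to encode them as a small design-space type, as in the sketch below; the specific levels are illustrative assumptions, not DiTTo's exact categories.

```typescript
// Rough encoding of the design dimensions discussed above. The levels are
// illustrative assumptions, not DiTTo's exact definitions.

type Fidelity = "abstract-glyph" | "2d-plan" | "3d-model" | "immersive";
type Timeliness = "historical" | "near-real-time" | "real-time";
type Aggregation = "raw" | "per-room" | "per-floor" | "per-building";

interface DTDesignPoint {
  fidelity: Fidelity;
  timeliness: Timeliness;
  aggregation: Aggregation;
}

// Example: a design point suited to rapid, time-critical monitoring.
const monitoringView: DTDesignPoint = {
  fidelity: "abstract-glyph",
  timeliness: "real-time",
  aggregation: "per-room",
};
```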

Can we predict design criteria for digital twins based on the activities they will support?

Preliminary results indicate that some visualisations are effective for activities beyond their original design intent, highlighting the adaptability and potential of these approaches. Blackwell's Patterns of User Experience (PUX) framework was used to conduct a structured analysis of user activities and map them to suitable digital twin representations.
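
In code, such an activity-to-representation mapping could be as simple as a lookup table; the sketch below is hypothetical, with activity names and mappings chosen for illustration rather than taken from the study's results.

```typescript
// Hypothetical activity-to-representation lookup, in the spirit of the PUX
// analysis above. Names and mappings are illustrative, not study results.

type Activity = "monitoring" | "exploration" | "presentation" | "planning";

const suggestedRepresentation: Record<Activity, string> = {
  monitoring: "low-fidelity real-time glyphs",
  exploration: "interactive 2D floor plan",
  presentation: "high-fidelity 3D model",
  planning: "aggregated historical dashboards",
};

console.log(suggestedRepresentation["monitoring"]); // → "low-fidelity real-time glyphs"
```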

Predictions

While I am actively answering these questions, feel free to explore the following:




Justas Brazauskas